Test Report: Docker_Linux_containerd 19302

686e9da65a2d4195f8e8610efbc417c3b07d1722:2024-07-19:35410

Failed tests (1/336)

Order  Failed test                  Duration
39     TestAddons/parallel/Ingress  91.99s
TestAddons/parallel/Ingress (91.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-636193 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:209: (dbg) Non-zero exit: kubectl --context addons-636193 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (1m30.054440959s)

** stderr ** 
	error: timed out waiting for the condition on pods/ingress-nginx-controller-6d9bd977d4-9rsfl

** /stderr **
addons_test.go:210: failed waiting for ingress-nginx-controller : exit status 1
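The failure above is a plain readiness-poll timeout: `kubectl wait --for=condition=ready --timeout=90s` polls the pod's Ready condition and exits non-zero once the deadline passes, which is why the command took almost exactly 1m30s. As an illustrative sketch (not kubectl's actual implementation), the wait loop behaves like this:

```python
import time

def wait_for_condition(check, timeout=90.0, interval=0.5,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or the deadline passes.

    Illustrative model of `kubectl wait --for=condition=ready --timeout=90s`:
    on timeout it gives up with an error (kubectl exits with status 1), which
    is what the test observed after 1m30.05s.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    raise TimeoutError("timed out waiting for the condition")

# The ingress-nginx controller pod never became Ready, so the check always
# fails until the deadline (short timeout here to keep the demo fast):
try:
    wait_for_condition(lambda: False, timeout=0.2, interval=0.05)
except TimeoutError as e:
    print(e)
```

Because the pod `ingress-nginx-controller-6d9bd977d4-9rsfl` never reached Ready within the window, every poll fails and the loop can only end in the timeout branch.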
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-636193
helpers_test.go:235: (dbg) docker inspect addons-636193:

-- stdout --
	[
	    {
	        "Id": "175cb76eb52460c7481b66a45bf413410314c00b0e4d0b4e819ed2d85af63d95",
	        "Created": "2024-07-19T03:27:15.38234229Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14568,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-19T03:27:15.506446827Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7bda27423b38cbebec7632cdf15a8fcb063ff209d17af249e6b3f1fbdb5fa681",
	        "ResolvConfPath": "/var/lib/docker/containers/175cb76eb52460c7481b66a45bf413410314c00b0e4d0b4e819ed2d85af63d95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/175cb76eb52460c7481b66a45bf413410314c00b0e4d0b4e819ed2d85af63d95/hostname",
	        "HostsPath": "/var/lib/docker/containers/175cb76eb52460c7481b66a45bf413410314c00b0e4d0b4e819ed2d85af63d95/hosts",
	        "LogPath": "/var/lib/docker/containers/175cb76eb52460c7481b66a45bf413410314c00b0e4d0b4e819ed2d85af63d95/175cb76eb52460c7481b66a45bf413410314c00b0e4d0b4e819ed2d85af63d95-json.log",
	        "Name": "/addons-636193",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-636193:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-636193",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9fe7d0cbe0296d1f1bed59b9bc8cef9046b9c80f0a8143cc0e4259319c3aeee8-init/diff:/var/lib/docker/overlay2/d506237cfc4bb2d18a781c86f2a74f30c7b88259564a9a1e628c458ac2bc7c8c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fe7d0cbe0296d1f1bed59b9bc8cef9046b9c80f0a8143cc0e4259319c3aeee8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fe7d0cbe0296d1f1bed59b9bc8cef9046b9c80f0a8143cc0e4259319c3aeee8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fe7d0cbe0296d1f1bed59b9bc8cef9046b9c80f0a8143cc0e4259319c3aeee8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-636193",
	                "Source": "/var/lib/docker/volumes/addons-636193/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-636193",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-636193",
	                "name.minikube.sigs.k8s.io": "addons-636193",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "868e4a56939a7d3bb3ab5def6be9650ff1bd1c375ac1bf7264b33567851dbda7",
	            "SandboxKey": "/var/run/docker/netns/868e4a56939a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-636193": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f15c3c4fcb02527ff9070ee1fd8ec389de19ec782c1a61671f414f28144f41dc",
	                    "EndpointID": "4ffb28d2b6d86922ca2f8a4d32bc10f8c250bac67dbf81573ea86739965651e2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-636193",
	                        "175cb76eb524"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
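The `docker inspect` record above shows the node container itself was healthy (`"Status": "running"`, `"ExitCode": 0`, IP 192.168.49.2 on the `addons-636193` network), so the failure is inside the cluster rather than at the container level. The same fields can be pulled out programmatically; a minimal sketch using Python's `json` module on a trimmed stand-in for the record above (not the full output):

```python
import json

# Trimmed, illustrative copy of the `docker inspect addons-636193` output
# shown above; the real record contains many more fields.
inspect_output = json.loads("""
[
  {
    "Name": "/addons-636193",
    "State": {"Status": "running", "Running": true,
              "OOMKilled": false, "ExitCode": 0},
    "NetworkSettings": {
      "Networks": {"addons-636193": {"IPAddress": "192.168.49.2"}}
    }
  }
]
""")

container = inspect_output[0]  # `docker inspect` always returns a JSON array
state = container["State"]
ip = container["NetworkSettings"]["Networks"]["addons-636193"]["IPAddress"]

print(container["Name"], state["Status"], ip)
```

In practice the same extraction is what `docker inspect --format` does with a Go template, e.g. `--format '{{.State.Status}}'`.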
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-636193 -n addons-636193
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-636193 logs -n 25: (1.108913378s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| delete  | -p download-only-548503                                                                     | download-only-548503   | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| delete  | -p download-only-972036                                                                     | download-only-972036   | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| delete  | -p download-only-601538                                                                     | download-only-601538   | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| delete  | -p download-only-548503                                                                     | download-only-548503   | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| start   | --download-only -p                                                                          | download-docker-927594 | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | download-docker-927594                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-927594                                                                   | download-docker-927594 | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-415059   | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | binary-mirror-415059                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41655                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-415059                                                                     | binary-mirror-415059   | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| addons  | enable dashboard -p                                                                         | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | addons-636193                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | addons-636193                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-636193 --wait=true                                                                | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-636193 addons                                                                        | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-636193 addons disable                                                                | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-636193 ip                                                                            | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	| addons  | addons-636193 addons disable                                                                | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | addons-636193                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | -p addons-636193                                                                            |                        |         |         |                     |                     |
	| addons  | addons-636193 addons disable                                                                | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | addons-636193                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | -p addons-636193                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-636193 ssh cat                                                                       | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | /opt/local-path-provisioner/pvc-b1bb7e8e-cb3c-4dfa-bcf4-226a66c3e989_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-636193 addons disable                                                                | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:34 UTC | 19 Jul 24 03:34 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-636193 addons                                                                        | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-636193 addons                                                                        | addons-636193          | jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:26:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:26:54.125809   13827 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:26:54.126062   13827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:26:54.126071   13827 out.go:304] Setting ErrFile to fd 2...
	I0719 03:26:54.126075   13827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:26:54.126273   13827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 03:26:54.127027   13827 out.go:298] Setting JSON to false
	I0719 03:26:54.127891   13827 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":558,"bootTime":1721359056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:26:54.127953   13827 start.go:139] virtualization: kvm guest
	I0719 03:26:54.130298   13827 out.go:177] * [addons-636193] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 03:26:54.131798   13827 notify.go:220] Checking for updates...
	I0719 03:26:54.131803   13827 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:26:54.133212   13827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:26:54.134568   13827 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	I0719 03:26:54.135929   13827 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	I0719 03:26:54.137199   13827 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 03:26:54.138393   13827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:26:54.139782   13827 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:26:54.160565   13827 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 03:26:54.160691   13827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:26:54.205114   13827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-19 03:26:54.19666393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:26:54.205206   13827 docker.go:307] overlay module found
	I0719 03:26:54.207150   13827 out.go:177] * Using the docker driver based on user configuration
	I0719 03:26:54.208453   13827 start.go:297] selected driver: docker
	I0719 03:26:54.208467   13827 start.go:901] validating driver "docker" against <nil>
	I0719 03:26:54.208481   13827 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:26:54.209156   13827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:26:54.256045   13827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-19 03:26:54.247745769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:26:54.256213   13827 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:26:54.256406   13827 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 03:26:54.258198   13827 out.go:177] * Using Docker driver with root privileges
	I0719 03:26:54.259647   13827 cni.go:84] Creating CNI manager for ""
	I0719 03:26:54.259666   13827 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0719 03:26:54.259677   13827 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 03:26:54.259742   13827 start.go:340] cluster config:
	{Name:addons-636193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-636193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:26:54.261268   13827 out.go:177] * Starting "addons-636193" primary control-plane node in "addons-636193" cluster
	I0719 03:26:54.262479   13827 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0719 03:26:54.263887   13827 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0719 03:26:54.265105   13827 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0719 03:26:54.265157   13827 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4
	I0719 03:26:54.265171   13827 cache.go:56] Caching tarball of preloaded images
	I0719 03:26:54.265190   13827 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 03:26:54.265283   13827 preload.go:172] Found /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 03:26:54.265298   13827 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on containerd
	I0719 03:26:54.265647   13827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/config.json ...
	I0719 03:26:54.265673   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/config.json: {Name:mk793285f70197d6b834a5568333bd29ebe4ec11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:26:54.280890   13827 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:26:54.280998   13827 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 03:26:54.281013   13827 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 03:26:54.281017   13827 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 03:26:54.281024   13827 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 03:26:54.281030   13827 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0719 03:27:06.493964   13827 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0719 03:27:06.494014   13827 cache.go:194] Successfully downloaded all kic artifacts
	I0719 03:27:06.494059   13827 start.go:360] acquireMachinesLock for addons-636193: {Name:mk062cf5fbfcf594055865c74ade3d0a5d556e85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:27:06.494187   13827 start.go:364] duration metric: took 103.036µs to acquireMachinesLock for "addons-636193"
	I0719 03:27:06.494218   13827 start.go:93] Provisioning new machine with config: &{Name:addons-636193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-636193 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0719 03:27:06.494339   13827 start.go:125] createHost starting for "" (driver="docker")
	I0719 03:27:06.496484   13827 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0719 03:27:06.496709   13827 start.go:159] libmachine.API.Create for "addons-636193" (driver="docker")
	I0719 03:27:06.496740   13827 client.go:168] LocalClient.Create starting
	I0719 03:27:06.496853   13827 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19302-5122/.minikube/certs/ca.pem
	I0719 03:27:06.613283   13827 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19302-5122/.minikube/certs/cert.pem
	I0719 03:27:06.694701   13827 cli_runner.go:164] Run: docker network inspect addons-636193 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0719 03:27:06.710681   13827 cli_runner.go:211] docker network inspect addons-636193 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0719 03:27:06.710744   13827 network_create.go:284] running [docker network inspect addons-636193] to gather additional debugging logs...
	I0719 03:27:06.710759   13827 cli_runner.go:164] Run: docker network inspect addons-636193
	W0719 03:27:06.725627   13827 cli_runner.go:211] docker network inspect addons-636193 returned with exit code 1
	I0719 03:27:06.725652   13827 network_create.go:287] error running [docker network inspect addons-636193]: docker network inspect addons-636193: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-636193 not found
	I0719 03:27:06.725662   13827 network_create.go:289] output of [docker network inspect addons-636193]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-636193 not found
	
	** /stderr **
	I0719 03:27:06.725745   13827 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0719 03:27:06.740934   13827 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eae260}
	I0719 03:27:06.740977   13827 network_create.go:124] attempt to create docker network addons-636193 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0719 03:27:06.741025   13827 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-636193 addons-636193
	I0719 03:27:06.802150   13827 network_create.go:108] docker network addons-636193 192.168.49.0/24 created
	I0719 03:27:06.802181   13827 kic.go:121] calculated static IP "192.168.49.2" for the "addons-636193" container
	I0719 03:27:06.802248   13827 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0719 03:27:06.817361   13827 cli_runner.go:164] Run: docker volume create addons-636193 --label name.minikube.sigs.k8s.io=addons-636193 --label created_by.minikube.sigs.k8s.io=true
	I0719 03:27:06.833683   13827 oci.go:103] Successfully created a docker volume addons-636193
	I0719 03:27:06.833753   13827 cli_runner.go:164] Run: docker run --rm --name addons-636193-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-636193 --entrypoint /usr/bin/test -v addons-636193:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0719 03:27:10.762464   13827 cli_runner.go:217] Completed: docker run --rm --name addons-636193-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-636193 --entrypoint /usr/bin/test -v addons-636193:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib: (3.928677175s)
	I0719 03:27:10.762490   13827 oci.go:107] Successfully prepared a docker volume addons-636193
	I0719 03:27:10.762511   13827 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0719 03:27:10.762531   13827 kic.go:194] Starting extracting preloaded images to volume ...
	I0719 03:27:10.762601   13827 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-636193:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0719 03:27:15.321323   13827 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-636193:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir: (4.558684505s)
	I0719 03:27:15.321352   13827 kic.go:203] duration metric: took 4.55881708s to extract preloaded images to volume ...
	W0719 03:27:15.321494   13827 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0719 03:27:15.321587   13827 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0719 03:27:15.368244   13827 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-636193 --name addons-636193 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-636193 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-636193 --network addons-636193 --ip 192.168.49.2 --volume addons-636193:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f
	I0719 03:27:15.669068   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Running}}
	I0719 03:27:15.685882   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:15.703199   13827 cli_runner.go:164] Run: docker exec addons-636193 stat /var/lib/dpkg/alternatives/iptables
	I0719 03:27:15.746417   13827 oci.go:144] the created container "addons-636193" has a running status.
	I0719 03:27:15.746471   13827 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa...
	I0719 03:27:15.974987   13827 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0719 03:27:16.000078   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:16.016680   13827 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0719 03:27:16.016699   13827 kic_runner.go:114] Args: [docker exec --privileged addons-636193 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0719 03:27:16.064174   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:16.082386   13827 machine.go:94] provisionDockerMachine start ...
	I0719 03:27:16.082486   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:16.106796   13827 main.go:141] libmachine: Using SSH client type: native
	I0719 03:27:16.107056   13827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0719 03:27:16.107077   13827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 03:27:16.253884   13827 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-636193
	
	I0719 03:27:16.253915   13827 ubuntu.go:169] provisioning hostname "addons-636193"
	I0719 03:27:16.253981   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:16.274084   13827 main.go:141] libmachine: Using SSH client type: native
	I0719 03:27:16.274250   13827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0719 03:27:16.274263   13827 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-636193 && echo "addons-636193" | sudo tee /etc/hostname
	I0719 03:27:16.396853   13827 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-636193
	
	I0719 03:27:16.396942   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:16.413399   13827 main.go:141] libmachine: Using SSH client type: native
	I0719 03:27:16.413565   13827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0719 03:27:16.413581   13827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-636193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-636193/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-636193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 03:27:16.522383   13827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:27:16.522407   13827 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19302-5122/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-5122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-5122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-5122/.minikube}
	I0719 03:27:16.522438   13827 ubuntu.go:177] setting up certificates
	I0719 03:27:16.522449   13827 provision.go:84] configureAuth start
	I0719 03:27:16.522502   13827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-636193
	I0719 03:27:16.539735   13827 provision.go:143] copyHostCerts
	I0719 03:27:16.539804   13827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-5122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-5122/.minikube/cert.pem (1123 bytes)
	I0719 03:27:16.539931   13827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-5122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-5122/.minikube/key.pem (1675 bytes)
	I0719 03:27:16.540006   13827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-5122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-5122/.minikube/ca.pem (1082 bytes)
	I0719 03:27:16.540074   13827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-5122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-5122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-5122/.minikube/certs/ca-key.pem org=jenkins.addons-636193 san=[127.0.0.1 192.168.49.2 addons-636193 localhost minikube]
	I0719 03:27:16.811402   13827 provision.go:177] copyRemoteCerts
	I0719 03:27:16.811460   13827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 03:27:16.811500   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:16.828398   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:16.910510   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 03:27:16.930447   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 03:27:16.949986   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 03:27:16.969359   13827 provision.go:87] duration metric: took 446.897416ms to configureAuth
	I0719 03:27:16.969385   13827 ubuntu.go:193] setting minikube options for container-runtime
	I0719 03:27:16.969523   13827 config.go:182] Loaded profile config "addons-636193": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 03:27:16.969533   13827 machine.go:97] duration metric: took 887.121861ms to provisionDockerMachine
	I0719 03:27:16.969540   13827 client.go:171] duration metric: took 10.472794164s to LocalClient.Create
	I0719 03:27:16.969556   13827 start.go:167] duration metric: took 10.472848796s to libmachine.API.Create "addons-636193"
	I0719 03:27:16.969566   13827 start.go:293] postStartSetup for "addons-636193" (driver="docker")
	I0719 03:27:16.969574   13827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 03:27:16.969610   13827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 03:27:16.969646   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:16.986635   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:17.070667   13827 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 03:27:17.073439   13827 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0719 03:27:17.073466   13827 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0719 03:27:17.073476   13827 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0719 03:27:17.073482   13827 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0719 03:27:17.073491   13827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-5122/.minikube/addons for local assets ...
	I0719 03:27:17.073543   13827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-5122/.minikube/files for local assets ...
	I0719 03:27:17.073564   13827 start.go:296] duration metric: took 103.993289ms for postStartSetup
	I0719 03:27:17.073796   13827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-636193
	I0719 03:27:17.090918   13827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/config.json ...
	I0719 03:27:17.091238   13827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 03:27:17.091284   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:17.107824   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:17.187031   13827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0719 03:27:17.190788   13827 start.go:128] duration metric: took 10.696433182s to createHost
	I0719 03:27:17.190810   13827 start.go:83] releasing machines lock for "addons-636193", held for 10.696608131s
	I0719 03:27:17.190870   13827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-636193
	I0719 03:27:17.207095   13827 ssh_runner.go:195] Run: cat /version.json
	I0719 03:27:17.207141   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:17.207189   13827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 03:27:17.207252   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:17.224198   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:17.225348   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:17.370310   13827 ssh_runner.go:195] Run: systemctl --version
	I0719 03:27:17.374294   13827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 03:27:17.377863   13827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0719 03:27:17.398767   13827 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0719 03:27:17.398842   13827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 03:27:17.422231   13827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0719 03:27:17.422264   13827 start.go:495] detecting cgroup driver to use...
	I0719 03:27:17.422296   13827 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 03:27:17.422332   13827 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 03:27:17.432467   13827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 03:27:17.441653   13827 docker.go:217] disabling cri-docker service (if available) ...
	I0719 03:27:17.441701   13827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 03:27:17.452943   13827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 03:27:17.465248   13827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 03:27:17.543699   13827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 03:27:17.618301   13827 docker.go:233] disabling docker service ...
	I0719 03:27:17.618363   13827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 03:27:17.634184   13827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 03:27:17.643938   13827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 03:27:17.722658   13827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 03:27:17.796281   13827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 03:27:17.805765   13827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:27:17.819305   13827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 03:27:17.827155   13827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 03:27:17.834760   13827 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 03:27:17.834799   13827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 03:27:17.842414   13827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:27:17.850059   13827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 03:27:17.857620   13827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:27:17.865235   13827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 03:27:17.872511   13827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 03:27:17.880271   13827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 03:27:17.888457   13827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 03:27:17.897038   13827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 03:27:17.904125   13827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 03:27:17.911282   13827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:27:17.990697   13827 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 03:27:18.094425   13827 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0719 03:27:18.094493   13827 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0719 03:27:18.097947   13827 start.go:563] Will wait 60s for crictl version
	I0719 03:27:18.097988   13827 ssh_runner.go:195] Run: which crictl
	I0719 03:27:18.100951   13827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 03:27:18.133225   13827 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0719 03:27:18.133299   13827 ssh_runner.go:195] Run: containerd --version
	I0719 03:27:18.156767   13827 ssh_runner.go:195] Run: containerd --version
	I0719 03:27:18.182747   13827 out.go:177] * Preparing Kubernetes v1.30.3 on containerd 1.7.19 ...
	I0719 03:27:18.184087   13827 cli_runner.go:164] Run: docker network inspect addons-636193 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0719 03:27:18.199704   13827 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0719 03:27:18.202906   13827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 03:27:18.212201   13827 kubeadm.go:883] updating cluster {Name:addons-636193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-636193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 03:27:18.212340   13827 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0719 03:27:18.212404   13827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 03:27:18.241604   13827 containerd.go:627] all images are preloaded for containerd runtime.
	I0719 03:27:18.241623   13827 containerd.go:534] Images already preloaded, skipping extraction
	I0719 03:27:18.241674   13827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 03:27:18.271384   13827 containerd.go:627] all images are preloaded for containerd runtime.
	I0719 03:27:18.271404   13827 cache_images.go:84] Images are preloaded, skipping loading
	I0719 03:27:18.271411   13827 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 containerd true true} ...
	I0719 03:27:18.271493   13827 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-636193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-636193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 03:27:18.271537   13827 ssh_runner.go:195] Run: sudo crictl info
	I0719 03:27:18.301540   13827 cni.go:84] Creating CNI manager for ""
	I0719 03:27:18.301561   13827 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0719 03:27:18.301575   13827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 03:27:18.301601   13827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-636193 NodeName:addons-636193 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 03:27:18.301724   13827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-636193"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 03:27:18.301774   13827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 03:27:18.309535   13827 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 03:27:18.309616   13827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 03:27:18.316801   13827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0719 03:27:18.331764   13827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 03:27:18.346573   13827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0719 03:27:18.361529   13827 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0719 03:27:18.364573   13827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 03:27:18.373761   13827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:27:18.442881   13827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 03:27:18.454184   13827 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193 for IP: 192.168.49.2
	I0719 03:27:18.454204   13827 certs.go:194] generating shared ca certs ...
	I0719 03:27:18.454224   13827 certs.go:226] acquiring lock for ca certs: {Name:mkd01ca1d41d005c3d0c79c428dfd7216b071be1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:18.454341   13827 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-5122/.minikube/ca.key
	I0719 03:27:18.656858   13827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-5122/.minikube/ca.crt ...
	I0719 03:27:18.656885   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/ca.crt: {Name:mk17bcd5a59dca779616aa7bd0241218a3ec5fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:18.657070   13827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-5122/.minikube/ca.key ...
	I0719 03:27:18.657085   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/ca.key: {Name:mkd79c659b3a1caf877cc77f5212f4f094e74c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:18.657175   13827 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-5122/.minikube/proxy-client-ca.key
	I0719 03:27:18.764104   13827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-5122/.minikube/proxy-client-ca.crt ...
	I0719 03:27:18.764139   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/proxy-client-ca.crt: {Name:mk6f7b9dfd62fd0d73b41f12abb5a7ddabadbfe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:18.764306   13827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-5122/.minikube/proxy-client-ca.key ...
	I0719 03:27:18.764316   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/proxy-client-ca.key: {Name:mk4d81726b57187efaf1a1e6c1cbd0dc518828f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:18.764381   13827 certs.go:256] generating profile certs ...
	I0719 03:27:18.764432   13827 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.key
	I0719 03:27:18.764445   13827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt with IP's: []
	I0719 03:27:18.992130   13827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt ...
	I0719 03:27:18.992155   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: {Name:mk5dcdaf4596905b753bf95fe1f1b9c9abd17a30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:18.992306   13827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.key ...
	I0719 03:27:18.992316   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.key: {Name:mk95ba5ef9529e3912432d5573d81d506fd161be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:18.992379   13827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.key.dbd6364c
	I0719 03:27:18.992396   13827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.crt.dbd6364c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0719 03:27:19.297707   13827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.crt.dbd6364c ...
	I0719 03:27:19.297732   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.crt.dbd6364c: {Name:mk87f7ce877ad66927d03439854c68e6aeb214ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:19.297901   13827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.key.dbd6364c ...
	I0719 03:27:19.297916   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.key.dbd6364c: {Name:mk623973fde84a06215c43e3b317752e89637cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:19.298008   13827 certs.go:381] copying /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.crt.dbd6364c -> /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.crt
	I0719 03:27:19.298078   13827 certs.go:385] copying /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.key.dbd6364c -> /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.key
	I0719 03:27:19.298122   13827 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/proxy-client.key
	I0719 03:27:19.298138   13827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/proxy-client.crt with IP's: []
	I0719 03:27:19.606031   13827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/proxy-client.crt ...
	I0719 03:27:19.606060   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/proxy-client.crt: {Name:mk4e9cd56bff8421f0ec36c5cdde16a8b63880f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:19.606225   13827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/proxy-client.key ...
	I0719 03:27:19.606237   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/proxy-client.key: {Name:mkff72377b499c0a591bf4dbeb958c459ff116dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:19.606415   13827 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-5122/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 03:27:19.606448   13827 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-5122/.minikube/certs/ca.pem (1082 bytes)
	I0719 03:27:19.606469   13827 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-5122/.minikube/certs/cert.pem (1123 bytes)
	I0719 03:27:19.606492   13827 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-5122/.minikube/certs/key.pem (1675 bytes)
	I0719 03:27:19.607057   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 03:27:19.628320   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 03:27:19.648510   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 03:27:19.668341   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 03:27:19.689838   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 03:27:19.710395   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 03:27:19.730924   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 03:27:19.751080   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 03:27:19.770421   13827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-5122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 03:27:19.790153   13827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 03:27:19.804553   13827 ssh_runner.go:195] Run: openssl version
	I0719 03:27:19.809377   13827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 03:27:19.817391   13827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:27:19.820369   13827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:27 /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:27:19.820422   13827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:27:19.826336   13827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 03:27:19.834282   13827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 03:27:19.836956   13827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 03:27:19.837004   13827 kubeadm.go:392] StartCluster: {Name:addons-636193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-636193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:27:19.837076   13827 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0719 03:27:19.837122   13827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 03:27:19.867132   13827 cri.go:89] found id: ""
	I0719 03:27:19.867195   13827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 03:27:19.875275   13827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 03:27:19.883302   13827 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0719 03:27:19.883361   13827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 03:27:19.890989   13827 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 03:27:19.891007   13827 kubeadm.go:157] found existing configuration files:
	
	I0719 03:27:19.891039   13827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 03:27:19.898027   13827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 03:27:19.898079   13827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 03:27:19.904675   13827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 03:27:19.911346   13827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 03:27:19.911381   13827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 03:27:19.918076   13827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 03:27:19.925173   13827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 03:27:19.925235   13827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 03:27:19.932393   13827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 03:27:19.939646   13827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 03:27:19.939690   13827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 03:27:19.946873   13827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0719 03:27:19.987466   13827 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 03:27:19.987525   13827 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 03:27:20.020526   13827 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0719 03:27:20.020586   13827 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1062-gcp
	I0719 03:27:20.020619   13827 kubeadm.go:310] OS: Linux
	I0719 03:27:20.020673   13827 kubeadm.go:310] CGROUPS_CPU: enabled
	I0719 03:27:20.020731   13827 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0719 03:27:20.020788   13827 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0719 03:27:20.020868   13827 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0719 03:27:20.020961   13827 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0719 03:27:20.021040   13827 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0719 03:27:20.021108   13827 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0719 03:27:20.021177   13827 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0719 03:27:20.021236   13827 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0719 03:27:20.072622   13827 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 03:27:20.072730   13827 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 03:27:20.072811   13827 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 03:27:20.254201   13827 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 03:27:20.256520   13827 out.go:204]   - Generating certificates and keys ...
	I0719 03:27:20.256626   13827 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 03:27:20.256725   13827 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 03:27:20.455719   13827 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 03:27:20.686046   13827 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 03:27:20.922491   13827 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 03:27:21.170233   13827 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 03:27:21.274319   13827 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 03:27:21.274456   13827 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-636193 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0719 03:27:21.400198   13827 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 03:27:21.400341   13827 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-636193 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0719 03:27:21.632466   13827 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 03:27:21.983700   13827 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 03:27:22.199099   13827 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 03:27:22.199167   13827 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 03:27:22.280830   13827 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 03:27:22.555825   13827 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 03:27:22.780433   13827 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 03:27:22.912137   13827 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 03:27:23.089143   13827 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 03:27:23.089676   13827 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 03:27:23.091951   13827 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 03:27:23.094358   13827 out.go:204]   - Booting up control plane ...
	I0719 03:27:23.094466   13827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 03:27:23.094562   13827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 03:27:23.094686   13827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 03:27:23.102860   13827 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 03:27:23.104641   13827 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 03:27:23.104690   13827 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 03:27:23.180588   13827 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 03:27:23.180674   13827 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 03:27:23.682586   13827 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.711112ms
	I0719 03:27:23.682777   13827 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 03:27:28.183942   13827 kubeadm.go:310] [api-check] The API server is healthy after 4.501634713s
	I0719 03:27:28.194739   13827 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 03:27:28.206039   13827 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 03:27:28.223104   13827 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 03:27:28.223378   13827 kubeadm.go:310] [mark-control-plane] Marking the node addons-636193 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 03:27:28.229533   13827 kubeadm.go:310] [bootstrap-token] Using token: kvssa6.78g24514th2nurt9
	I0719 03:27:28.230954   13827 out.go:204]   - Configuring RBAC rules ...
	I0719 03:27:28.231085   13827 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 03:27:28.233518   13827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 03:27:28.240133   13827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 03:27:28.242359   13827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 03:27:28.244481   13827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 03:27:28.246507   13827 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 03:27:28.589504   13827 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 03:27:29.007169   13827 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 03:27:29.590486   13827 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 03:27:29.591349   13827 kubeadm.go:310] 
	I0719 03:27:29.591428   13827 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 03:27:29.591455   13827 kubeadm.go:310] 
	I0719 03:27:29.591570   13827 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 03:27:29.591582   13827 kubeadm.go:310] 
	I0719 03:27:29.591622   13827 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 03:27:29.591692   13827 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 03:27:29.591764   13827 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 03:27:29.591774   13827 kubeadm.go:310] 
	I0719 03:27:29.591840   13827 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 03:27:29.591850   13827 kubeadm.go:310] 
	I0719 03:27:29.591917   13827 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 03:27:29.591927   13827 kubeadm.go:310] 
	I0719 03:27:29.591995   13827 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 03:27:29.592118   13827 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 03:27:29.592214   13827 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 03:27:29.592222   13827 kubeadm.go:310] 
	I0719 03:27:29.592334   13827 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 03:27:29.592446   13827 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 03:27:29.592460   13827 kubeadm.go:310] 
	I0719 03:27:29.592584   13827 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kvssa6.78g24514th2nurt9 \
	I0719 03:27:29.592735   13827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:488c2a8c7e0845e347a24e991c4505f20624246ad88f3b15df196b183bd32fbd \
	I0719 03:27:29.592766   13827 kubeadm.go:310] 	--control-plane 
	I0719 03:27:29.592775   13827 kubeadm.go:310] 
	I0719 03:27:29.592890   13827 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 03:27:29.592905   13827 kubeadm.go:310] 
	I0719 03:27:29.593018   13827 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kvssa6.78g24514th2nurt9 \
	I0719 03:27:29.593157   13827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:488c2a8c7e0845e347a24e991c4505f20624246ad88f3b15df196b183bd32fbd 
	I0719 03:27:29.594522   13827 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1062-gcp\n", err: exit status 1
	I0719 03:27:29.594693   13827 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 03:27:29.594725   13827 cni.go:84] Creating CNI manager for ""
	I0719 03:27:29.594738   13827 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0719 03:27:29.596310   13827 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 03:27:29.597546   13827 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 03:27:29.600988   13827 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 03:27:29.601004   13827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 03:27:29.616766   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 03:27:29.797620   13827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 03:27:29.797726   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:29.797748   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-636193 minikube.k8s.io/updated_at=2024_07_19T03_27_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=addons-636193 minikube.k8s.io/primary=true
	I0719 03:27:29.805608   13827 ops.go:34] apiserver oom_adj: -16
	I0719 03:27:29.878945   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:30.379708   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:30.879262   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:31.379810   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:31.879360   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:32.379239   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:32.879848   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:33.379827   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:33.879642   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:34.379821   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:34.879365   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:35.379873   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:35.879737   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:36.380045   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:36.879728   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:37.379572   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:37.879421   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:38.379183   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:38.879968   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:39.379015   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:39.879967   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:40.379780   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:40.879853   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:41.379430   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:41.879842   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:42.379897   13827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:27:42.442670   13827 kubeadm.go:1113] duration metric: took 12.644971768s to wait for elevateKubeSystemPrivileges
	I0719 03:27:42.442709   13827 kubeadm.go:394] duration metric: took 22.60570982s to StartCluster
	I0719 03:27:42.442727   13827 settings.go:142] acquiring lock: {Name:mk76949f1657a382e69896c12cc81b014ac122cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:42.442825   13827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-5122/kubeconfig
	I0719 03:27:42.443187   13827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/kubeconfig: {Name:mk3183fae086d96a3d75d5333639366fcf995579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:42.443386   13827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 03:27:42.443422   13827 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0719 03:27:42.443484   13827 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0719 03:27:42.443577   13827 addons.go:69] Setting yakd=true in profile "addons-636193"
	I0719 03:27:42.443607   13827 addons.go:234] Setting addon yakd=true in "addons-636193"
	I0719 03:27:42.443641   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.443663   13827 config.go:182] Loaded profile config "addons-636193": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 03:27:42.443706   13827 addons.go:69] Setting inspektor-gadget=true in profile "addons-636193"
	I0719 03:27:42.443727   13827 addons.go:234] Setting addon inspektor-gadget=true in "addons-636193"
	I0719 03:27:42.443748   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.444144   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.444153   13827 addons.go:69] Setting metrics-server=true in profile "addons-636193"
	I0719 03:27:42.444174   13827 addons.go:234] Setting addon metrics-server=true in "addons-636193"
	I0719 03:27:42.444193   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.444218   13827 addons.go:69] Setting storage-provisioner=true in profile "addons-636193"
	I0719 03:27:42.444253   13827 addons.go:234] Setting addon storage-provisioner=true in "addons-636193"
	I0719 03:27:42.444285   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.444612   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.444687   13827 addons.go:69] Setting cloud-spanner=true in profile "addons-636193"
	I0719 03:27:42.444701   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.444724   13827 addons.go:234] Setting addon cloud-spanner=true in "addons-636193"
	I0719 03:27:42.444757   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.444767   13827 addons.go:69] Setting ingress=true in profile "addons-636193"
	I0719 03:27:42.444803   13827 addons.go:234] Setting addon ingress=true in "addons-636193"
	I0719 03:27:42.444799   13827 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-636193"
	I0719 03:27:42.444825   13827 addons.go:69] Setting volcano=true in profile "addons-636193"
	I0719 03:27:42.444842   13827 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-636193"
	I0719 03:27:42.444859   13827 addons.go:234] Setting addon volcano=true in "addons-636193"
	I0719 03:27:42.444879   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.444891   13827 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-636193"
	I0719 03:27:42.444901   13827 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-636193"
	I0719 03:27:42.444931   13827 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-636193"
	I0719 03:27:42.444942   13827 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-636193"
	I0719 03:27:42.444960   13827 addons.go:69] Setting registry=true in profile "addons-636193"
	I0719 03:27:42.444970   13827 addons.go:69] Setting default-storageclass=true in profile "addons-636193"
	I0719 03:27:42.444987   13827 addons.go:234] Setting addon registry=true in "addons-636193"
	I0719 03:27:42.444992   13827 addons.go:69] Setting gcp-auth=true in profile "addons-636193"
	I0719 03:27:42.445006   13827 mustload.go:65] Loading cluster: addons-636193
	I0719 03:27:42.445011   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.445186   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.445195   13827 addons.go:69] Setting helm-tiller=true in profile "addons-636193"
	I0719 03:27:42.445217   13827 addons.go:234] Setting addon helm-tiller=true in "addons-636193"
	I0719 03:27:42.445232   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.445241   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.445331   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.445414   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.445500   13827 addons.go:69] Setting volumesnapshots=true in profile "addons-636193"
	I0719 03:27:42.445529   13827 addons.go:234] Setting addon volumesnapshots=true in "addons-636193"
	I0719 03:27:42.445580   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.445668   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.446012   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.444965   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.444885   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.446713   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.446901   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.449927   13827 out.go:177] * Verifying Kubernetes components...
	I0719 03:27:42.444858   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.450199   13827 addons.go:69] Setting ingress-dns=true in profile "addons-636193"
	I0719 03:27:42.450280   13827 addons.go:234] Setting addon ingress-dns=true in "addons-636193"
	I0719 03:27:42.450348   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.450696   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.451376   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.444987   13827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-636193"
	I0719 03:27:42.453034   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.444144   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.445187   13827 config.go:182] Loaded profile config "addons-636193": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 03:27:42.457647   13827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:27:42.479073   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.489348   13827 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-636193"
	I0719 03:27:42.489398   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.489856   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.499756   13827 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0719 03:27:42.501522   13827 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0719 03:27:42.501543   13827 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0719 03:27:42.502758   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.517208   13827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 03:27:42.517338   13827 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0719 03:27:42.518564   13827 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 03:27:42.518598   13827 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 03:27:42.518681   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.518573   13827 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0719 03:27:42.518984   13827 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:27:42.518996   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 03:27:42.519035   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.523592   13827 out.go:177]   - Using image docker.io/registry:2.8.3
	I0719 03:27:42.523730   13827 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0719 03:27:42.524022   13827 addons.go:234] Setting addon default-storageclass=true in "addons-636193"
	I0719 03:27:42.524069   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.524555   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:42.525042   13827 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0719 03:27:42.525058   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0719 03:27:42.525164   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.526343   13827 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0719 03:27:42.526779   13827 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0719 03:27:42.526845   13827 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0719 03:27:42.528101   13827 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0719 03:27:42.528107   13827 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0719 03:27:42.528116   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0719 03:27:42.528130   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0719 03:27:42.528162   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.528176   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.528397   13827 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0719 03:27:42.529520   13827 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0719 03:27:42.530841   13827 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0719 03:27:42.531607   13827 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 03:27:42.532025   13827 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0719 03:27:42.532655   13827 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0719 03:27:42.532673   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0719 03:27:42.532767   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.533221   13827 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 03:27:42.533237   13827 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 03:27:42.533290   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.533411   13827 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0719 03:27:42.533552   13827 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 03:27:42.533567   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0719 03:27:42.533605   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.535611   13827 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0719 03:27:42.537322   13827 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0719 03:27:42.538931   13827 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0719 03:27:42.540591   13827 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0719 03:27:42.541815   13827 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0719 03:27:42.543183   13827 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0719 03:27:42.543207   13827 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0719 03:27:42.543275   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.547682   13827 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 03:27:42.548906   13827 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0719 03:27:42.549997   13827 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 03:27:42.551228   13827 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 03:27:42.551250   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0719 03:27:42.551315   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.558846   13827 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0719 03:27:42.559848   13827 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0719 03:27:42.559867   13827 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0719 03:27:42.559928   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.560088   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:42.570863   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.571823   13827 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0719 03:27:42.575739   13827 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0719 03:27:42.577184   13827 out.go:177]   - Using image docker.io/busybox:stable
	I0719 03:27:42.577884   13827 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 03:27:42.577907   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0719 03:27:42.577970   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.579998   13827 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 03:27:42.580025   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0719 03:27:42.580092   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.581925   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.606784   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.606856   13827 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 03:27:42.606871   13827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 03:27:42.606941   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:42.606790   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.606781   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.606781   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.615114   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.621037   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.621323   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.628702   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.631040   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.631708   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.633693   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.636204   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:42.639634   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	W0719 03:27:42.655372   13827 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0719 03:27:42.655413   13827 retry.go:31] will retry after 268.562648ms: ssh: handshake failed: EOF
	W0719 03:27:42.656716   13827 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0719 03:27:42.656748   13827 retry.go:31] will retry after 223.137152ms: ssh: handshake failed: EOF
	W0719 03:27:42.657030   13827 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0719 03:27:42.657049   13827 retry.go:31] will retry after 216.155184ms: ssh: handshake failed: EOF
	W0719 03:27:42.658129   13827 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0719 03:27:42.658156   13827 retry.go:31] will retry after 267.56305ms: ssh: handshake failed: EOF
	I0719 03:27:42.669340   13827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 03:27:42.669443   13827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 03:27:42.953332   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:27:42.975441   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0719 03:27:43.055513   13827 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 03:27:43.055598   13827 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 03:27:43.057285   13827 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0719 03:27:43.057353   13827 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0719 03:27:43.153689   13827 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0719 03:27:43.153720   13827 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0719 03:27:43.154559   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 03:27:43.167539   13827 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 03:27:43.167572   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 03:27:43.252616   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 03:27:43.254065   13827 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0719 03:27:43.254089   13827 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0719 03:27:43.353079   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0719 03:27:43.357759   13827 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0719 03:27:43.357789   13827 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0719 03:27:43.452604   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 03:27:43.459882   13827 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 03:27:43.459917   13827 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 03:27:43.460759   13827 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 03:27:43.460781   13827 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 03:27:43.463622   13827 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0719 03:27:43.463644   13827 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0719 03:27:43.469796   13827 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 03:27:43.469819   13827 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0719 03:27:43.552414   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 03:27:43.660188   13827 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0719 03:27:43.660233   13827 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0719 03:27:43.662824   13827 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 03:27:43.662843   13827 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 03:27:43.753382   13827 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0719 03:27:43.753412   13827 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0719 03:27:43.753725   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 03:27:43.754171   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 03:27:43.754915   13827 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 03:27:43.754935   13827 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 03:27:43.759430   13827 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0719 03:27:43.759453   13827 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0719 03:27:43.853423   13827 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0719 03:27:43.853473   13827 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0719 03:27:43.968766   13827 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0719 03:27:43.968795   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0719 03:27:43.976317   13827 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 03:27:43.976344   13827 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 03:27:44.153381   13827 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0719 03:27:44.153452   13827 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0719 03:27:44.158878   13827 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0719 03:27:44.158909   13827 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0719 03:27:44.357976   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0719 03:27:44.372055   13827 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.702588442s)
	I0719 03:27:44.372987   13827 node_ready.go:35] waiting up to 6m0s for node "addons-636193" to be "Ready" ...
	I0719 03:27:44.373200   13827 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.703833815s)
	I0719 03:27:44.373224   13827 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0719 03:27:44.454679   13827 node_ready.go:49] node "addons-636193" has status "Ready":"True"
	I0719 03:27:44.454760   13827 node_ready.go:38] duration metric: took 81.741914ms for node "addons-636193" to be "Ready" ...
	I0719 03:27:44.454784   13827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 03:27:44.456475   13827 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0719 03:27:44.456544   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0719 03:27:44.467860   13827 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5prnf" in "kube-system" namespace to be "Ready" ...
	I0719 03:27:44.555256   13827 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 03:27:44.555283   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 03:27:44.661257   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0719 03:27:44.756778   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 03:27:44.773965   13827 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0719 03:27:44.774001   13827 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0719 03:27:44.951215   13827 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0719 03:27:44.951245   13827 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0719 03:27:44.956504   13827 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-636193" context rescaled to 1 replicas
	I0719 03:27:45.167109   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 03:27:45.370068   13827 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0719 03:27:45.370099   13827 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0719 03:27:45.471164   13827 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-5prnf" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-5prnf" not found
	I0719 03:27:45.471205   13827 pod_ready.go:81] duration metric: took 1.003309253s for pod "coredns-7db6d8ff4d-5prnf" in "kube-system" namespace to be "Ready" ...
	E0719 03:27:45.471220   13827 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-5prnf" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-5prnf" not found
	I0719 03:27:45.471228   13827 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace to be "Ready" ...
	I0719 03:27:45.556491   13827 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0719 03:27:45.556519   13827 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0719 03:27:45.773661   13827 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 03:27:45.773688   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0719 03:27:46.151431   13827 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0719 03:27:46.151517   13827 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0719 03:27:46.451499   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.498127448s)
	I0719 03:27:46.451944   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.476472791s)
	I0719 03:27:46.457534   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 03:27:46.963729   13827 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0719 03:27:46.963827   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0719 03:27:47.453499   13827 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0719 03:27:47.453529   13827 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0719 03:27:47.563718   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:27:47.863293   13827 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0719 03:27:47.863384   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0719 03:27:48.154803   13827 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0719 03:27:48.154852   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0719 03:27:48.359745   13827 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 03:27:48.359769   13827 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0719 03:27:48.573753   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 03:27:49.770547   13827 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 03:27:49.770628   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:49.791626   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:50.060342   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:27:50.158691   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.004092736s)
	I0719 03:27:50.158731   13827 addons.go:475] Verifying addon ingress=true in "addons-636193"
	I0719 03:27:50.158999   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.906343554s)
	I0719 03:27:50.160122   13827 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 03:27:50.160739   13827 out.go:177] * Verifying ingress addon...
	I0719 03:27:50.163586   13827 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 03:27:50.168241   13827 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 03:27:50.168269   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:50.364777   13827 addons.go:234] Setting addon gcp-auth=true in "addons-636193"
	I0719 03:27:50.364906   13827 host.go:66] Checking if "addons-636193" exists ...
	I0719 03:27:50.365448   13827 cli_runner.go:164] Run: docker container inspect addons-636193 --format={{.State.Status}}
	I0719 03:27:50.386983   13827 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 03:27:50.387083   13827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-636193
	I0719 03:27:50.403112   13827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/addons-636193/id_rsa Username:docker}
	I0719 03:27:50.670296   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:51.167650   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:51.669053   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:52.055735   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.702614176s)
	I0719 03:27:52.055786   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.603097284s)
	I0719 03:27:52.055864   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.503359133s)
	I0719 03:27:52.055968   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.301738819s)
	I0719 03:27:52.056004   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.697989176s)
	I0719 03:27:52.056017   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.394673041s)
	I0719 03:27:52.056032   13827 addons.go:475] Verifying addon registry=true in "addons-636193"
	I0719 03:27:52.056183   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.29935919s)
	I0719 03:27:52.056199   13827 addons.go:475] Verifying addon metrics-server=true in "addons-636193"
	I0719 03:27:52.056221   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.3021522s)
	I0719 03:27:52.056324   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.889168952s)
	W0719 03:27:52.056376   13827 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 03:27:52.056405   13827 retry.go:31] will retry after 296.710772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 03:27:52.056385   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.59878578s)
	I0719 03:27:52.075900   13827 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-636193 service yakd-dashboard -n yakd-dashboard
	
	I0719 03:27:52.075936   13827 out.go:177] * Verifying registry addon...
	I0719 03:27:52.079581   13827 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0719 03:27:52.080315   13827 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0719 03:27:52.083706   13827 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 03:27:52.083732   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:52.167523   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:52.354118   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 03:27:52.477539   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:27:52.584525   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:52.667539   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:53.056590   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.48276427s)
	I0719 03:27:53.056636   13827 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.669580434s)
	I0719 03:27:53.056688   13827 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-636193"
	I0719 03:27:53.058241   13827 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 03:27:53.058284   13827 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0719 03:27:53.060177   13827 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 03:27:53.061055   13827 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 03:27:53.063791   13827 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 03:27:53.063815   13827 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 03:27:53.069104   13827 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 03:27:53.069132   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:53.083906   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:53.156064   13827 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 03:27:53.156089   13827 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 03:27:53.168810   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:53.252766   13827 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 03:27:53.252791   13827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0719 03:27:53.276852   13827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 03:27:53.567724   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:53.584140   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:53.667919   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:53.979987   13827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.625816916s)
	I0719 03:27:54.067402   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:54.084363   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:54.177404   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:54.260962   13827 addons.go:475] Verifying addon gcp-auth=true in "addons-636193"
	I0719 03:27:54.263288   13827 out.go:177] * Verifying gcp-auth addon...
	I0719 03:27:54.266087   13827 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 03:27:54.268040   13827 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 03:27:54.566191   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:54.584894   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:54.668218   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:54.978205   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:27:55.067288   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:55.084651   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:55.168263   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:55.566184   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:55.584250   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:55.667662   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:56.067067   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:56.084449   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:56.167799   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:56.566991   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:56.584452   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:56.667524   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:57.066713   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:57.083772   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:57.168404   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:57.475909   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:27:57.566144   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:57.583992   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:57.667768   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:58.066412   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:58.084460   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:58.167938   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:58.567733   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:58.583405   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:58.667142   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:59.067179   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:59.084490   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:59.168922   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:27:59.477350   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:27:59.567816   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:27:59.587362   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:27:59.667770   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:00.066981   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:00.084378   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:00.167308   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:00.567081   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:00.583603   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:00.667496   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:01.066528   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:01.084520   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:01.167621   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:01.566034   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:01.583787   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:01.667881   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:01.976793   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:02.066338   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:02.084354   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:02.167339   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:02.566610   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:02.583542   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:02.667715   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:03.067156   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:03.084056   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:03.167113   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:03.566528   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:03.584421   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:03.667756   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:03.976906   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:04.066856   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:04.083353   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:04.168001   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:04.566595   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:04.584672   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:04.667843   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:05.067057   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:05.084200   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:05.167208   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:05.566233   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:05.583834   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:05.667875   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:06.065793   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:06.083916   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:06.168056   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:06.477056   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:06.566302   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:06.584009   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:06.667075   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:07.068668   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:07.083726   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:07.167980   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:07.566194   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:07.584023   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:07.667200   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:08.065891   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:08.083936   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:08.167846   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:08.566666   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:08.583787   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:08.669698   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:08.976077   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:09.066858   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:09.083713   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:09.168192   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:09.566502   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:09.584359   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:09.667866   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:10.066911   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:10.084069   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:10.167071   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:10.566243   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:10.583984   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:10.668099   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:10.976770   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:11.066390   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:11.084067   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:11.166962   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:11.566131   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:11.583610   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:11.667618   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:12.066256   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:12.084147   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:12.167297   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:12.566930   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:12.584040   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:12.666990   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:13.068103   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:13.083475   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:13.169180   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:13.476882   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:13.566320   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:13.584086   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:13.666861   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:14.066263   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:14.083708   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:14.167059   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:14.566370   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:14.584649   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:14.668340   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:15.068312   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:15.086608   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:15.167270   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:15.566763   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:15.583819   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:15.667882   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:15.976745   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:16.066129   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:16.084214   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:16.166766   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:16.566726   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:16.583336   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:16.667367   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:17.066331   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:17.084220   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:17.167403   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:17.566629   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:17.583963   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:17.667102   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:18.066076   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:18.083790   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:18.167927   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:18.476115   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:18.566293   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:18.583903   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:18.667988   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:19.066813   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:19.083568   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:19.167838   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:19.566551   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:19.583647   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:19.667826   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:20.066627   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:20.083628   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:20.167509   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:20.565736   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:20.583035   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:20.667260   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:20.976844   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:21.066099   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:21.084729   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:21.167629   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:21.566755   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:21.583322   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:21.667384   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:22.066834   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:22.083768   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:22.167956   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:22.565599   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:22.582869   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:22.667890   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:23.066574   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:23.083335   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:23.167452   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:23.476718   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:23.566148   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:23.583578   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:23.667401   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:24.066002   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:24.083766   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:24.167049   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:24.566676   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:24.583461   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:24.667336   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:25.066038   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:25.083779   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:25.168052   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:25.479147   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:25.566497   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:25.584181   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:25.667338   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:26.066371   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:26.084017   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:26.166736   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:26.566160   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:26.584325   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:26.667029   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:27.066155   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:27.083927   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:27.167889   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:27.566399   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:27.584094   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:27.667154   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:27.976460   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:28.066579   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:28.083269   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:28.167288   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:28.568111   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:28.583503   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:28.667348   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:29.067959   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:29.084317   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:29.167932   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:29.566182   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:29.583962   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:29.667913   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:30.066320   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:30.084078   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:30.167269   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:30.476451   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:30.566377   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:30.583141   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:30.667101   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:31.067169   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:31.083805   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:31.167627   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:31.566539   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:31.584407   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:31.668093   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:32.067492   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:32.084062   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:32.167990   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:32.476842   13827 pod_ready.go:102] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"False"
	I0719 03:28:32.567102   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:32.583969   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:32.668607   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:33.067461   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:33.084777   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:33.167412   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:33.476667   13827 pod_ready.go:92] pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace has status "Ready":"True"
	I0719 03:28:33.476693   13827 pod_ready.go:81] duration metric: took 48.00545549s for pod "coredns-7db6d8ff4d-9szqf" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.476705   13827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-636193" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.480990   13827 pod_ready.go:92] pod "etcd-addons-636193" in "kube-system" namespace has status "Ready":"True"
	I0719 03:28:33.481010   13827 pod_ready.go:81] duration metric: took 4.297678ms for pod "etcd-addons-636193" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.481025   13827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-636193" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.485309   13827 pod_ready.go:92] pod "kube-apiserver-addons-636193" in "kube-system" namespace has status "Ready":"True"
	I0719 03:28:33.485329   13827 pod_ready.go:81] duration metric: took 4.298134ms for pod "kube-apiserver-addons-636193" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.485340   13827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-636193" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.489475   13827 pod_ready.go:92] pod "kube-controller-manager-addons-636193" in "kube-system" namespace has status "Ready":"True"
	I0719 03:28:33.489495   13827 pod_ready.go:81] duration metric: took 4.148081ms for pod "kube-controller-manager-addons-636193" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.489504   13827 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-twtf5" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.493279   13827 pod_ready.go:92] pod "kube-proxy-twtf5" in "kube-system" namespace has status "Ready":"True"
	I0719 03:28:33.493299   13827 pod_ready.go:81] duration metric: took 3.787848ms for pod "kube-proxy-twtf5" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.493311   13827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-636193" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.585079   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:33.587498   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:33.668135   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:33.875128   13827 pod_ready.go:92] pod "kube-scheduler-addons-636193" in "kube-system" namespace has status "Ready":"True"
	I0719 03:28:33.875152   13827 pod_ready.go:81] duration metric: took 381.831457ms for pod "kube-scheduler-addons-636193" in "kube-system" namespace to be "Ready" ...
	I0719 03:28:33.875162   13827 pod_ready.go:38] duration metric: took 49.420331073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 03:28:33.875183   13827 api_server.go:52] waiting for apiserver process to appear ...
	I0719 03:28:33.875233   13827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 03:28:33.890864   13827 api_server.go:72] duration metric: took 51.44741133s to wait for apiserver process to appear ...
	I0719 03:28:33.890892   13827 api_server.go:88] waiting for apiserver healthz status ...
	I0719 03:28:33.890916   13827 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0719 03:28:33.894634   13827 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0719 03:28:33.895627   13827 api_server.go:141] control plane version: v1.30.3
	I0719 03:28:33.895653   13827 api_server.go:131] duration metric: took 4.756076ms to wait for apiserver health ...
	I0719 03:28:33.895661   13827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 03:28:34.066844   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:34.082491   13827 system_pods.go:59] 19 kube-system pods found
	I0719 03:28:34.082525   13827 system_pods.go:61] "coredns-7db6d8ff4d-9szqf" [ea7f0217-35ef-42cb-87d6-721de2168d5b] Running
	I0719 03:28:34.082538   13827 system_pods.go:61] "csi-hostpath-attacher-0" [93fdd6ce-22cb-49e9-a0b5-7d67e147c021] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 03:28:34.082547   13827 system_pods.go:61] "csi-hostpath-resizer-0" [8c4e301d-0fd6-4898-8906-1e5949aaabf2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 03:28:34.082559   13827 system_pods.go:61] "csi-hostpathplugin-8s68k" [33520f63-aaee-477e-a554-a0725c96467f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 03:28:34.082566   13827 system_pods.go:61] "etcd-addons-636193" [9eb1c149-1068-4682-9c54-232fdf7ea001] Running
	I0719 03:28:34.082572   13827 system_pods.go:61] "kindnet-hds9w" [e942f3dc-3650-41be-b229-64edf64842e8] Running
	I0719 03:28:34.082577   13827 system_pods.go:61] "kube-apiserver-addons-636193" [041d0f17-f409-474d-a6ea-f78cf63bc3b4] Running
	I0719 03:28:34.082589   13827 system_pods.go:61] "kube-controller-manager-addons-636193" [02be2bab-27c0-4b3c-98e0-533e8a49e795] Running
	I0719 03:28:34.082599   13827 system_pods.go:61] "kube-ingress-dns-minikube" [9f2181bd-6789-4912-b1be-f0ccf9fc8d8e] Running
	I0719 03:28:34.082603   13827 system_pods.go:61] "kube-proxy-twtf5" [fecb6c45-878d-4add-a033-77be069b6d08] Running
	I0719 03:28:34.082608   13827 system_pods.go:61] "kube-scheduler-addons-636193" [b64ff738-0014-4324-99ea-4b02a74763db] Running
	I0719 03:28:34.082613   13827 system_pods.go:61] "metrics-server-c59844bb4-7d8vw" [8725dfae-09aa-48d9-b58e-32c4ab8bd284] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 03:28:34.082622   13827 system_pods.go:61] "nvidia-device-plugin-daemonset-wt852" [7e9a607b-91f6-4f69-874a-07f2a9d578c8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0719 03:28:34.082631   13827 system_pods.go:61] "registry-656c9c8d9c-2crwh" [6b5c99b6-8fa3-406b-b438-81cf572e6546] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0719 03:28:34.082655   13827 system_pods.go:61] "registry-proxy-qbxh9" [eae5240d-9ec3-42ce-969d-004f729dcd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 03:28:34.082664   13827 system_pods.go:61] "snapshot-controller-745499f584-lq976" [70d46744-6572-4d65-bfcd-d5202aae280b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 03:28:34.082673   13827 system_pods.go:61] "snapshot-controller-745499f584-slltz" [aedef12a-b05b-4a96-8ae6-df083e301089] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 03:28:34.082683   13827 system_pods.go:61] "storage-provisioner" [80d3df23-fada-469e-baed-bfcc1c7dd5b0] Running
	I0719 03:28:34.082692   13827 system_pods.go:61] "tiller-deploy-6677d64bcd-tq89t" [4fbc4740-9ca6-4d39-951e-8659619897a4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0719 03:28:34.082702   13827 system_pods.go:74] duration metric: took 187.034741ms to wait for pod list to return data ...
	I0719 03:28:34.082715   13827 default_sa.go:34] waiting for default service account to be created ...
	I0719 03:28:34.084433   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:34.168233   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:34.274238   13827 default_sa.go:45] found service account: "default"
	I0719 03:28:34.274267   13827 default_sa.go:55] duration metric: took 191.541281ms for default service account to be created ...
	I0719 03:28:34.274278   13827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 03:28:34.480327   13827 system_pods.go:86] 19 kube-system pods found
	I0719 03:28:34.480353   13827 system_pods.go:89] "coredns-7db6d8ff4d-9szqf" [ea7f0217-35ef-42cb-87d6-721de2168d5b] Running
	I0719 03:28:34.480363   13827 system_pods.go:89] "csi-hostpath-attacher-0" [93fdd6ce-22cb-49e9-a0b5-7d67e147c021] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 03:28:34.480369   13827 system_pods.go:89] "csi-hostpath-resizer-0" [8c4e301d-0fd6-4898-8906-1e5949aaabf2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 03:28:34.480379   13827 system_pods.go:89] "csi-hostpathplugin-8s68k" [33520f63-aaee-477e-a554-a0725c96467f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 03:28:34.480385   13827 system_pods.go:89] "etcd-addons-636193" [9eb1c149-1068-4682-9c54-232fdf7ea001] Running
	I0719 03:28:34.480390   13827 system_pods.go:89] "kindnet-hds9w" [e942f3dc-3650-41be-b229-64edf64842e8] Running
	I0719 03:28:34.480396   13827 system_pods.go:89] "kube-apiserver-addons-636193" [041d0f17-f409-474d-a6ea-f78cf63bc3b4] Running
	I0719 03:28:34.480400   13827 system_pods.go:89] "kube-controller-manager-addons-636193" [02be2bab-27c0-4b3c-98e0-533e8a49e795] Running
	I0719 03:28:34.480409   13827 system_pods.go:89] "kube-ingress-dns-minikube" [9f2181bd-6789-4912-b1be-f0ccf9fc8d8e] Running
	I0719 03:28:34.480413   13827 system_pods.go:89] "kube-proxy-twtf5" [fecb6c45-878d-4add-a033-77be069b6d08] Running
	I0719 03:28:34.480420   13827 system_pods.go:89] "kube-scheduler-addons-636193" [b64ff738-0014-4324-99ea-4b02a74763db] Running
	I0719 03:28:34.480426   13827 system_pods.go:89] "metrics-server-c59844bb4-7d8vw" [8725dfae-09aa-48d9-b58e-32c4ab8bd284] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 03:28:34.480435   13827 system_pods.go:89] "nvidia-device-plugin-daemonset-wt852" [7e9a607b-91f6-4f69-874a-07f2a9d578c8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0719 03:28:34.480442   13827 system_pods.go:89] "registry-656c9c8d9c-2crwh" [6b5c99b6-8fa3-406b-b438-81cf572e6546] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0719 03:28:34.480450   13827 system_pods.go:89] "registry-proxy-qbxh9" [eae5240d-9ec3-42ce-969d-004f729dcd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 03:28:34.480456   13827 system_pods.go:89] "snapshot-controller-745499f584-lq976" [70d46744-6572-4d65-bfcd-d5202aae280b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 03:28:34.480469   13827 system_pods.go:89] "snapshot-controller-745499f584-slltz" [aedef12a-b05b-4a96-8ae6-df083e301089] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 03:28:34.480478   13827 system_pods.go:89] "storage-provisioner" [80d3df23-fada-469e-baed-bfcc1c7dd5b0] Running
	I0719 03:28:34.480487   13827 system_pods.go:89] "tiller-deploy-6677d64bcd-tq89t" [4fbc4740-9ca6-4d39-951e-8659619897a4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0719 03:28:34.480498   13827 system_pods.go:126] duration metric: took 206.214357ms to wait for k8s-apps to be running ...
	I0719 03:28:34.480508   13827 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 03:28:34.480549   13827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:28:34.494405   13827 system_svc.go:56] duration metric: took 13.888216ms WaitForService to wait for kubelet
	I0719 03:28:34.494446   13827 kubeadm.go:582] duration metric: took 52.050985261s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 03:28:34.494472   13827 node_conditions.go:102] verifying NodePressure condition ...
	I0719 03:28:34.566224   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:34.584368   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:34.667841   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:34.674830   13827 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0719 03:28:34.674858   13827 node_conditions.go:123] node cpu capacity is 8
	I0719 03:28:34.674876   13827 node_conditions.go:105] duration metric: took 180.396424ms to run NodePressure ...
	I0719 03:28:34.674891   13827 start.go:241] waiting for startup goroutines ...
	I0719 03:28:35.066524   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:35.083521   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:35.167452   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:35.566845   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:35.583607   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:35.667695   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:36.066330   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:36.084389   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:36.167585   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:36.565873   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:36.583753   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:36.667733   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:37.065978   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:37.083696   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:37.168225   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:37.566896   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:37.584131   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:37.667037   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:38.066424   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:38.082996   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:38.167961   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:38.567143   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:38.583330   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:38.667402   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:39.068649   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:39.084398   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:39.168271   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:39.566560   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:39.584251   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:39.668004   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:40.066949   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:40.084029   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:40.167772   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:40.565929   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:40.583280   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:40.667415   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:41.066211   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:41.084290   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:41.167225   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:41.566050   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:41.584023   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:41.668615   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:28:42.067375   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:28:42.084268   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:28:42.167273   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 109 similar polling lines elided: all three pods (kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx) were re-checked every ~500ms and remained "Pending: [<nil>]" from 03:28:42 through 03:29:00 ...]
	I0719 03:29:00.584882   13827 kapi.go:107] duration metric: took 1m8.505299592s to wait for kubernetes.io/minikube-addons=registry ...
	I0719 03:29:00.668181   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 76 similar polling lines elided: kubernetes.io/minikube-addons=csi-hostpath-driver and app.kubernetes.io/name=ingress-nginx were re-checked every ~500ms and remained "Pending: [<nil>]" from 03:29:01 through 03:29:19 ...]
	I0719 03:29:20.066454   13827 kapi.go:107] duration metric: took 1m27.005394193s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 03:29:20.168136   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:20.666854   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:21.168099   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:21.667101   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:22.167217   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:22.667855   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:23.168257   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:23.667845   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:24.167511   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:24.667899   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:25.167569   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:25.667123   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:26.167234   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:26.667209   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:27.168003   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:27.668202   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:28.167513   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:28.667982   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:29.167193   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:29.667525   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:30.167738   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:30.667672   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:31.167902   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:31.667289   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:32.167910   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:32.666996   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:33.167635   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:33.667662   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:34.167139   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:34.666976   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:35.167215   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:35.667257   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:36.167630   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:36.667772   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:37.168676   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:37.667669   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:38.167336   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:38.667382   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:39.167736   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:39.668539   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:40.167835   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:40.667688   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:41.167888   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:41.666950   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:42.167241   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:29:42.667420   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:30:38.269155   13827 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 03:30:38.269176   13827 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:31:14.268817   13827 kapi.go:107] duration metric: took 3m20.00272971s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 03:31:14.270332   13827 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-636193 cluster.
	I0719 03:31:14.271537   13827 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 03:31:14.272678   13827 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0719 03:31:14.667074   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:15.167018   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:15.667300   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:16.167254   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:16.667466   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:17.168009   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:17.667263   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:18.167661   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:18.667751   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:19.166734   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:19.667949   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:20.167721   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:20.667800   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:21.167970   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:21.667277   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:22.167052   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:22.667424   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:23.167317   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:23.667691   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:24.167969   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:24.667040   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:25.166889   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:25.667805   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:26.168058   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:26.667276   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:27.167652   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:27.667346   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:28.167365   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:28.667786   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:29.167637   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:29.667961   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:30.168455   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:30.667465   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:31.167042   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:31.667329   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:32.167164   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:32.667365   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:33.166899   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:33.666990   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:34.167387   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:34.667585   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:35.167816   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:35.667975   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:36.168428   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:36.667252   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:37.167781   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:37.668166   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:38.167346   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:38.667501   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:39.167832   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:39.667176   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:40.167369   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:40.667973   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:41.167576   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:41.667747   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:42.167987   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:42.667521   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:43.167732   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:43.668027   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:44.167340   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:44.667072   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:45.167129   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:45.667423   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:46.167587   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:46.667772   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:47.167680   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:47.667976   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:48.167215   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:48.667044   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:49.167262   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:49.667221   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:50.167411   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:50.667211   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:51.167158   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:51.667197   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:52.167465   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:52.667463   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:53.167258   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:53.667191   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:54.167797   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:54.667506   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:55.167192   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:55.667473   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:56.167601   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:56.667734   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:57.167859   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:57.668149   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:58.167071   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:58.666961   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:59.167221   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:59.667449   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:00.167725   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:00.667721   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:01.167365   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:01.667437   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:02.167652   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:02.667663   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:03.167704   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:03.668214   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:04.167935   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:04.668034   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:05.167280   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:05.667912   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:06.167057   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:06.667330   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:07.167541   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:07.667767   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:08.168065   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:08.667166   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:09.167504   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:09.667644   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:10.168161   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:10.667613   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:11.167767   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:11.668362   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:12.166995   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:12.667385   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:13.167505   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:13.667947   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:14.167246   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:14.667168   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:15.167031   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:15.667061   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:16.167294   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:16.667510   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:17.167611   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:17.667667   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:18.167956   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:18.667102   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:19.167487   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:19.667793   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:20.167212   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:20.667330   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:21.167463   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:21.667307   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:22.167320   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:22.667727   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:23.167596   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:23.668127   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:24.167041   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:24.667110   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:25.167087   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:25.667137   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:26.167004   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:26.667502   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:27.167354   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:27.667167   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:28.167301   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:28.667459   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:29.167637   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:29.667301   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:30.167191   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:30.667681   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:31.167385   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:31.667065   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:32.167387   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:32.667385   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:33.167687   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:33.667665   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:34.167223   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:34.667354   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:35.167389   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:35.668022   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:36.168067   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:36.667186   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:37.167323   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:37.666692   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:38.167850   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:38.667146   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:39.167207   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:39.668431   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:40.167629   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:40.668724   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:41.168080   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:41.667171   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:42.167551   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:42.667980   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:43.167872   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:43.667197   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:44.167576   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:44.667661   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:45.168274   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:45.667127   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:46.167229   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:46.667317   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:47.167320   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:47.667276   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:48.167405   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:48.667429   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:49.167578   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:49.667670   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:50.167812   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:50.666884   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:51.167846   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:51.666943   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:52.167092   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:52.667189   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:53.167196   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:53.668189   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:54.167501   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:54.667685   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:55.168146   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:55.667456   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:56.167170   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:56.667303   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:57.167425   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:57.667993   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:58.167399   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:58.667919   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:59.167160   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:59.667387   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:00.167743   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:00.667946   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:01.167127   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:01.667148   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:02.167106   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:02.667677   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:03.168005   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:03.667477   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:04.167716   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:04.668104   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:05.167451   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:05.667782   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:06.168004   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:06.667533   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:07.167755   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:07.667817   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:08.167421   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:08.667891   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:09.168230   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:09.667325   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:10.167665   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:10.667956   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:11.167402   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:11.668038   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:12.167771   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:12.667050   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:13.167202   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:13.667391   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:14.167644   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:14.667700   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:15.167448   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:15.667756   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:16.168068   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:16.668316   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:17.167096   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:17.667257   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:18.166912   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:18.667560   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:19.167751   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:19.667618   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:20.167589   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:20.667558   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:21.167902   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:21.667181   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:22.168109   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:22.668180   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:23.168051   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:23.667015   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:24.167591   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:24.667869   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:25.166995   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:25.667046   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:26.166742   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:26.668033   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:27.167340   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:27.667091   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:28.167411   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:28.667635   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:29.168055   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:29.666897   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:30.166963   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:30.666968   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:31.167342   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:31.667224   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:32.166780   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:32.667347   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:33.167275   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:33.667120   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:34.167619   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:34.667616   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:35.167669   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:35.667906   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:36.167214   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:36.667667   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:37.167503   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:37.667736   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:38.168033   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:38.667202   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:39.167986   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:39.667981   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:40.167187   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:40.667370   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:41.167012   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:41.667020   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:42.166796   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:42.667035   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:43.167470   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:43.667182   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:44.167625   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:44.667551   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:45.167635   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:45.669551   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:46.167272   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:46.667384   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:47.167279   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:47.666968   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:48.167978   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:48.666988   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:49.167222   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:49.667235   13827 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:50.164152   13827 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=ingress-nginx" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0719 03:33:50.164181   13827 kapi.go:107] duration metric: took 6m0.000598447s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0719 03:33:50.164259   13827 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0719 03:33:50.166193   13827 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, volcano, ingress-dns, metrics-server, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0719 03:33:50.167619   13827 addons.go:510] duration metric: took 6m7.724132692s for enable addons: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin volcano ingress-dns metrics-server helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0719 03:33:50.167669   13827 start.go:246] waiting for cluster config update ...
	I0719 03:33:50.167694   13827 start.go:255] writing updated cluster config ...
	I0719 03:33:50.167957   13827 ssh_runner.go:195] Run: rm -f paused
	I0719 03:33:50.215927   13827 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 03:33:50.218030   13827 out.go:177] * Done! kubectl is now configured to use "addons-636193" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9b97cd03889b6       53dd31cf1afe4       About a minute ago   Running             headlamp                  0                   9793ec9391919       headlamp-7867546754-tlxrd
	7f12dc8efbd0d       db2fc13d44d50       4 minutes ago        Running             gcp-auth                  0                   d184dc8f29d75       gcp-auth-5db96cd9b4-kxppb
	6e012d2ea8187       684c5ea3b61b2       6 minutes ago        Exited              patch                     0                   f1451dc8b2a30       ingress-nginx-admission-patch-7xmt7
	d8d71ae2f5686       684c5ea3b61b2       6 minutes ago        Exited              create                    0                   7480fb2148daa       ingress-nginx-admission-create-d6xmp
	5dfbb3bd452b1       c7e3a3eeaf5ed       6 minutes ago        Running             yakd                      0                   1895bbbb4dbee       yakd-dashboard-799879c74f-rqsh5
	74fc1df18a066       e16d1e3a10667       6 minutes ago        Running             local-path-provisioner    0                   2ce24497d1212       local-path-provisioner-8d985888d-6ljcr
	fd33d7c483cc0       cbb01a7bd410d       7 minutes ago        Running             coredns                   0                   d06f8840057cb       coredns-7db6d8ff4d-9szqf
	af00ef3dbfe5f       30dd67412fdea       7 minutes ago        Running             minikube-ingress-dns      0                   6cf54d6e58031       kube-ingress-dns-minikube
	3bc69ac372312       6e38f40d628db       7 minutes ago        Running             storage-provisioner       0                   3d1c3ec6e6a64       storage-provisioner
	c583fe8a5b792       55bb025d2cfa5       7 minutes ago        Running             kube-proxy                0                   7d51d586569ee       kube-proxy-twtf5
	10e81b9277d1d       5cc3abe5717db       7 minutes ago        Running             kindnet-cni               0                   3196d2608e061       kindnet-hds9w
	54d7a52ab0348       3edc18e7b7672       8 minutes ago        Running             kube-scheduler            0                   63e76504c4136       kube-scheduler-addons-636193
	cfbc45907e0ba       3861cfcd7c04c       8 minutes ago        Running             etcd                      0                   6701bb8079355       etcd-addons-636193
	d8a65f62cffbf       76932a3b37d7e       8 minutes ago        Running             kube-controller-manager   0                   ef62ae990b913       kube-controller-manager-addons-636193
	6a4ce622a7fb7       1f6d574d502f3       8 minutes ago        Running             kube-apiserver            0                   ab06509d1ed7b       kube-apiserver-addons-636193
	
	
	==> containerd <==
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.355370088Z" level=info msg="RemovePodSandbox \"151b9bc475552766e713709a50c1afa87a3641e5fed5ebd4d13af9199cafe82b\" returns successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.355720871Z" level=info msg="StopPodSandbox for \"95f9249e6ff7871fdc7bfc3a97e89dabdfaa56b17a50d5f2ff20f45808d12963\""
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.362416084Z" level=info msg="TearDown network for sandbox \"95f9249e6ff7871fdc7bfc3a97e89dabdfaa56b17a50d5f2ff20f45808d12963\" successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.362438033Z" level=info msg="StopPodSandbox for \"95f9249e6ff7871fdc7bfc3a97e89dabdfaa56b17a50d5f2ff20f45808d12963\" returns successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.362802076Z" level=info msg="RemovePodSandbox for \"95f9249e6ff7871fdc7bfc3a97e89dabdfaa56b17a50d5f2ff20f45808d12963\""
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.362840012Z" level=info msg="Forcibly stopping sandbox \"95f9249e6ff7871fdc7bfc3a97e89dabdfaa56b17a50d5f2ff20f45808d12963\""
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.369242242Z" level=info msg="TearDown network for sandbox \"95f9249e6ff7871fdc7bfc3a97e89dabdfaa56b17a50d5f2ff20f45808d12963\" successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.373267161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95f9249e6ff7871fdc7bfc3a97e89dabdfaa56b17a50d5f2ff20f45808d12963\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.373321087Z" level=info msg="RemovePodSandbox \"95f9249e6ff7871fdc7bfc3a97e89dabdfaa56b17a50d5f2ff20f45808d12963\" returns successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.373646258Z" level=info msg="StopPodSandbox for \"0c657bfe59be143daab0203fb49bdfec5fb8a4fb88aa8b08efc59b37be1c591a\""
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.380720920Z" level=info msg="TearDown network for sandbox \"0c657bfe59be143daab0203fb49bdfec5fb8a4fb88aa8b08efc59b37be1c591a\" successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.380745570Z" level=info msg="StopPodSandbox for \"0c657bfe59be143daab0203fb49bdfec5fb8a4fb88aa8b08efc59b37be1c591a\" returns successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.381083978Z" level=info msg="RemovePodSandbox for \"0c657bfe59be143daab0203fb49bdfec5fb8a4fb88aa8b08efc59b37be1c591a\""
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.381113672Z" level=info msg="Forcibly stopping sandbox \"0c657bfe59be143daab0203fb49bdfec5fb8a4fb88aa8b08efc59b37be1c591a\""
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.387927667Z" level=info msg="TearDown network for sandbox \"0c657bfe59be143daab0203fb49bdfec5fb8a4fb88aa8b08efc59b37be1c591a\" successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.392446152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c657bfe59be143daab0203fb49bdfec5fb8a4fb88aa8b08efc59b37be1c591a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.392504528Z" level=info msg="RemovePodSandbox \"0c657bfe59be143daab0203fb49bdfec5fb8a4fb88aa8b08efc59b37be1c591a\" returns successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.392876894Z" level=info msg="StopPodSandbox for \"985cb9895dc8a3ece02cf9083e16395bacda58cf1d1991b6dee77b683751b212\""
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.399840934Z" level=info msg="TearDown network for sandbox \"985cb9895dc8a3ece02cf9083e16395bacda58cf1d1991b6dee77b683751b212\" successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.399865410Z" level=info msg="StopPodSandbox for \"985cb9895dc8a3ece02cf9083e16395bacda58cf1d1991b6dee77b683751b212\" returns successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.400216471Z" level=info msg="RemovePodSandbox for \"985cb9895dc8a3ece02cf9083e16395bacda58cf1d1991b6dee77b683751b212\""
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.400249034Z" level=info msg="Forcibly stopping sandbox \"985cb9895dc8a3ece02cf9083e16395bacda58cf1d1991b6dee77b683751b212\""
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.406827691Z" level=info msg="TearDown network for sandbox \"985cb9895dc8a3ece02cf9083e16395bacda58cf1d1991b6dee77b683751b212\" successfully"
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.410694547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"985cb9895dc8a3ece02cf9083e16395bacda58cf1d1991b6dee77b683751b212\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jul 19 03:35:29 addons-636193 containerd[852]: time="2024-07-19T03:35:29.410756193Z" level=info msg="RemovePodSandbox \"985cb9895dc8a3ece02cf9083e16395bacda58cf1d1991b6dee77b683751b212\" returns successfully"
	
	
	==> coredns [fd33d7c483cc09aaaf4b63ecf559bbf350e3da7c48715ae71478737b88736d6c] <==
	[INFO] 10.244.0.10:37660 - 22756 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086247s
	[INFO] 10.244.0.10:48285 - 1109 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003344536s
	[INFO] 10.244.0.10:48285 - 47273 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004307612s
	[INFO] 10.244.0.10:54914 - 13243 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00636047s
	[INFO] 10.244.0.10:54914 - 42678 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006465317s
	[INFO] 10.244.0.10:52155 - 22811 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003637113s
	[INFO] 10.244.0.10:52155 - 27928 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004100431s
	[INFO] 10.244.0.10:60466 - 28849 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000063473s
	[INFO] 10.244.0.10:60466 - 22414 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000098603s
	[INFO] 10.244.0.25:49002 - 54241 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000180543s
	[INFO] 10.244.0.25:41911 - 6944 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00025677s
	[INFO] 10.244.0.25:43205 - 4166 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138002s
	[INFO] 10.244.0.25:54053 - 1872 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161021s
	[INFO] 10.244.0.25:37682 - 31760 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096104s
	[INFO] 10.244.0.25:48149 - 53916 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110928s
	[INFO] 10.244.0.25:47923 - 60930 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.006943185s
	[INFO] 10.244.0.25:45273 - 57528 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008001918s
	[INFO] 10.244.0.25:46560 - 27493 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005765438s
	[INFO] 10.244.0.25:55632 - 59232 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005932387s
	[INFO] 10.244.0.25:43682 - 4124 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00455947s
	[INFO] 10.244.0.25:50129 - 7259 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004910716s
	[INFO] 10.244.0.25:56041 - 42942 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000841217s
	[INFO] 10.244.0.25:33085 - 54830 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000911208s
	[INFO] 10.244.0.28:55068 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000185234s
	[INFO] 10.244.0.28:60793 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126739s
	
	
	==> describe nodes <==
	Name:               addons-636193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-636193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=addons-636193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T03_27_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-636193
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:27:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-636193
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:35:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:34:37 +0000   Fri, 19 Jul 2024 03:27:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:34:37 +0000   Fri, 19 Jul 2024 03:27:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:34:37 +0000   Fri, 19 Jul 2024 03:27:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:34:37 +0000   Fri, 19 Jul 2024 03:27:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-636193
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859324Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859324Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a462e9e4d8e4b2e9d8b39a9a09c7afd
	  System UUID:                ea4baeaa-db30-4fa2-a639-11488dea1d1d
	  Boot ID:                    75a82d8d-380e-48dd-859f-df2616023254
	  Kernel Version:             5.15.0-1062-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.19
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-5db96cd9b4-kxppb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  headlamp                    headlamp-7867546754-tlxrd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-9rsfl    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m45s
	  kube-system                 coredns-7db6d8ff4d-9szqf                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m52s
	  kube-system                 etcd-addons-636193                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m6s
	  kube-system                 kindnet-hds9w                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m52s
	  kube-system                 kube-apiserver-addons-636193                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-controller-manager-addons-636193        200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-proxy-twtf5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 kube-scheduler-addons-636193                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  local-path-storage          local-path-provisioner-8d985888d-6ljcr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  yakd-dashboard              yakd-dashboard-799879c74f-rqsh5              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     7m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)   100m (1%)
	  memory             438Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m49s  kube-proxy       
	  Normal  Starting                 8m6s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m6s   kubelet          Node addons-636193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m6s   kubelet          Node addons-636193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m6s   kubelet          Node addons-636193 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m53s  node-controller  Node addons-636193 event: Registered Node addons-636193 in Controller
	
	
	==> dmesg <==
	[  +0.003623] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001435] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002171] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.004132]  #5
	[  +0.000709]  #6
	[  +0.003685]  #7
	[  +0.059484] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.386147] i8042: Warning: Keylock active
	[  +0.007637] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003478] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000685] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000780] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000753] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000719] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000630] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000767] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000650] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.572073] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.047997] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005852] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.014363] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002609] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.014657] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.266734] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [cfbc45907e0baa9b35b149990b822ca2c815ab0bef15ea91de9bcb8fbd12ea8f] <==
	{"level":"info","ts":"2024-07-19T03:27:24.463325Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T03:27:24.893564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T03:27:24.893607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T03:27:24.893625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-07-19T03:27:24.893656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T03:27:24.893664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-19T03:27:24.893679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-19T03:27:24.893689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-19T03:27:24.894537Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T03:27:24.89511Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T03:27:24.895139Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T03:27:24.895107Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-636193 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T03:27:24.895351Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T03:27:24.89539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T03:27:24.895436Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T03:27:24.895504Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T03:27:24.895525Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T03:27:24.897178Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T03:27:24.897785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-19T03:28:52.532912Z","caller":"traceutil/trace.go:171","msg":"trace[634315006] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"116.387822ms","start":"2024-07-19T03:28:52.416507Z","end":"2024-07-19T03:28:52.532895Z","steps":["trace[634315006] 'process raft request'  (duration: 116.249135ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T03:28:52.576234Z","caller":"traceutil/trace.go:171","msg":"trace[1364403256] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"157.217093ms","start":"2024-07-19T03:28:52.418998Z","end":"2024-07-19T03:28:52.576215Z","steps":["trace[1364403256] 'process raft request'  (duration: 157.101594ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T03:28:52.936046Z","caller":"traceutil/trace.go:171","msg":"trace[1801977609] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"118.92397ms","start":"2024-07-19T03:28:52.817105Z","end":"2024-07-19T03:28:52.936029Z","steps":["trace[1801977609] 'process raft request'  (duration: 118.825568ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T03:29:04.650618Z","caller":"traceutil/trace.go:171","msg":"trace[2067914838] transaction","detail":"{read_only:false; response_revision:1264; number_of_response:1; }","duration":"107.17345ms","start":"2024-07-19T03:29:04.543423Z","end":"2024-07-19T03:29:04.650596Z","steps":["trace[2067914838] 'process raft request'  (duration: 25.113653ms)","trace[2067914838] 'compare'  (duration: 81.917906ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:29:13.880203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.165832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:29:13.880276Z","caller":"traceutil/trace.go:171","msg":"trace[1986957619] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:0; response_revision:1318; }","duration":"112.277767ms","start":"2024-07-19T03:29:13.767986Z","end":"2024-07-19T03:29:13.880264Z","steps":["trace[1986957619] 'range keys from in-memory index tree'  (duration: 112.094152ms)"],"step_count":1}
	
	
	==> gcp-auth [7f12dc8efbd0d98ba32a29f4d7aedd0b71d53fba456c55b1cbec554478b88e69] <==
	2024/07/19 03:33:55 Ready to write response ...
	2024/07/19 03:34:01 Ready to marshal response ...
	2024/07/19 03:34:01 Ready to write response ...
	2024/07/19 03:34:01 Ready to marshal response ...
	2024/07/19 03:34:01 Ready to write response ...
	2024/07/19 03:34:05 Ready to marshal response ...
	2024/07/19 03:34:05 Ready to write response ...
	2024/07/19 03:34:05 Ready to marshal response ...
	2024/07/19 03:34:05 Ready to write response ...
	2024/07/19 03:34:13 Ready to marshal response ...
	2024/07/19 03:34:13 Ready to write response ...
	2024/07/19 03:34:13 Ready to marshal response ...
	2024/07/19 03:34:13 Ready to write response ...
	2024/07/19 03:34:20 Ready to marshal response ...
	2024/07/19 03:34:20 Ready to write response ...
	2024/07/19 03:34:20 Ready to marshal response ...
	2024/07/19 03:34:20 Ready to write response ...
	2024/07/19 03:34:20 Ready to marshal response ...
	2024/07/19 03:34:20 Ready to write response ...
	2024/07/19 03:34:24 Ready to marshal response ...
	2024/07/19 03:34:24 Ready to write response ...
	2024/07/19 03:34:44 Ready to marshal response ...
	2024/07/19 03:34:44 Ready to write response ...
	2024/07/19 03:34:57 Ready to marshal response ...
	2024/07/19 03:34:57 Ready to write response ...
	
	
	==> kernel <==
	 03:35:35 up 17 min,  0 users,  load average: 0.34, 0.38, 0.24
	Linux addons-636193 5.15.0-1062-gcp #70~20.04.1-Ubuntu SMP Fri May 24 20:12:18 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [10e81b9277d1dd0d3a6a0cbb867ed075c21f474cf9b692d3b76817b140e5a467] <==
	I0719 03:34:25.252433       1 main.go:303] handling current node
	W0719 03:34:27.750711       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0719 03:34:27.750748       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0719 03:34:35.252508       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 03:34:35.252543       1 main.go:303] handling current node
	W0719 03:34:42.639567       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 03:34:42.639606       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0719 03:34:45.252673       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 03:34:45.252713       1 main.go:303] handling current node
	I0719 03:34:55.252064       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 03:34:55.252101       1 main.go:303] handling current node
	W0719 03:34:57.725497       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0719 03:34:57.725534       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0719 03:35:05.252859       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 03:35:05.252889       1 main.go:303] handling current node
	I0719 03:35:15.252496       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 03:35:15.252529       1 main.go:303] handling current node
	W0719 03:35:20.569670       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0719 03:35:20.569702       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0719 03:35:20.626852       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 03:35:20.626890       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0719 03:35:25.252136       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 03:35:25.252166       1 main.go:303] handling current node
	W0719 03:35:34.124217       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0719 03:35:34.124250       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	
	
	==> kube-apiserver [6a4ce622a7fb7deed7de42fb2c6bc6d7a6db769d543e1cd4db4d7d3391a306f4] <==
	I0719 03:34:18.652209       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0719 03:34:18.666598       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0719 03:34:19.075405       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0719 03:34:19.151638       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0719 03:34:19.173830       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0719 03:34:19.173878       1 cacher.go:168] Terminating all watchers from cacher commands.bus.volcano.sh
	W0719 03:34:19.667100       1 cacher.go:168] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0719 03:34:19.671995       1 cacher.go:168] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0719 03:34:19.757924       1 cacher.go:168] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0719 03:34:19.952114       1 cacher.go:168] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0719 03:34:20.174725       1 cacher.go:168] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0719 03:34:20.495272       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.237.187"}
	W0719 03:34:20.589844       1 cacher.go:168] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0719 03:34:51.679429       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0719 03:35:13.865530       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 03:35:13.865582       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 03:35:13.878756       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 03:35:13.878793       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 03:35:13.893209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 03:35:13.893245       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 03:35:13.904826       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 03:35:13.904871       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0719 03:35:14.879798       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0719 03:35:14.905701       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0719 03:35:14.955148       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [d8a65f62cffbfe6642730db2cb04e957d669e83e5b7342b4a6c3282cc1aa2b88] <==
	W0719 03:35:15.982831       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:15.982867       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:16.148228       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:16.148269       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:17.614108       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:17.614147       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:17.923267       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:17.923300       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:18.719649       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:18.719682       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:22.480160       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:22.480192       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:23.937501       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:23.937533       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:24.041848       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:24.041879       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:24.774338       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:24.774376       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0719 03:35:24.870350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="111.924µs"
	W0719 03:35:28.201077       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:28.201118       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:32.394986       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:32.395019       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 03:35:34.100190       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 03:35:34.100223       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [c583fe8a5b79204463aa52d6074d2652e399e373261ab2ed82b915c04cacf422] <==
	I0719 03:27:45.061269       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:27:45.159450       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0719 03:27:45.654140       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0719 03:27:45.654191       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:27:45.657509       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0719 03:27:45.657537       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0719 03:27:45.657564       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:27:45.658274       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:27:45.658310       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:27:45.670726       1 config.go:319] "Starting node config controller"
	I0719 03:27:45.670775       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:27:45.670859       1 config.go:192] "Starting service config controller"
	I0719 03:27:45.670869       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:27:45.670916       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:27:45.670922       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:27:45.770940       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:27:45.770993       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 03:27:45.774719       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [54d7a52ab0348f2bff11d02aa1273acc871ce9c89b860059ce594094985e569b] <==
	E0719 03:27:26.556574       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 03:27:26.556605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 03:27:26.556508       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 03:27:26.556797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 03:27:26.556825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 03:27:26.556835       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 03:27:26.557000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 03:27:26.557020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 03:27:26.557028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 03:27:26.557036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 03:27:27.410885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 03:27:27.410916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 03:27:27.437808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 03:27:27.437846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 03:27:27.469988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 03:27:27.470028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 03:27:27.474944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 03:27:27.474980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 03:27:27.534314       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 03:27:27.534346       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 03:27:27.544284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 03:27:27.544319       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 03:27:27.596326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 03:27:27.596363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0719 03:27:29.281843       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 03:35:08 addons-636193 kubelet[1702]: I0719 03:35:08.778021    1702 scope.go:117] "RemoveContainer" containerID="a9cd6542a9eceb6affa8d300cf9d5db6e92fb76274ac31586bfc52a4acc76b4c"
	Jul 19 03:35:08 addons-636193 kubelet[1702]: I0719 03:35:08.851355    1702 scope.go:117] "RemoveContainer" containerID="a9cd6542a9eceb6affa8d300cf9d5db6e92fb76274ac31586bfc52a4acc76b4c"
	Jul 19 03:35:08 addons-636193 kubelet[1702]: E0719 03:35:08.852360    1702 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9cd6542a9eceb6affa8d300cf9d5db6e92fb76274ac31586bfc52a4acc76b4c\": not found" containerID="a9cd6542a9eceb6affa8d300cf9d5db6e92fb76274ac31586bfc52a4acc76b4c"
	Jul 19 03:35:08 addons-636193 kubelet[1702]: I0719 03:35:08.852403    1702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9cd6542a9eceb6affa8d300cf9d5db6e92fb76274ac31586bfc52a4acc76b4c"} err="failed to get container status \"a9cd6542a9eceb6affa8d300cf9d5db6e92fb76274ac31586bfc52a4acc76b4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9cd6542a9eceb6affa8d300cf9d5db6e92fb76274ac31586bfc52a4acc76b4c\": not found"
	Jul 19 03:35:08 addons-636193 kubelet[1702]: I0719 03:35:08.859137    1702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33520f63-aaee-477e-a554-a0725c96467f" path="/var/lib/kubelet/pods/33520f63-aaee-477e-a554-a0725c96467f/volumes"
	Jul 19 03:35:08 addons-636193 kubelet[1702]: I0719 03:35:08.859723    1702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c4e301d-0fd6-4898-8906-1e5949aaabf2" path="/var/lib/kubelet/pods/8c4e301d-0fd6-4898-8906-1e5949aaabf2/volumes"
	Jul 19 03:35:08 addons-636193 kubelet[1702]: I0719 03:35:08.860065    1702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fdd6ce-22cb-49e9-a0b5-7d67e147c021" path="/var/lib/kubelet/pods/93fdd6ce-22cb-49e9-a0b5-7d67e147c021/volumes"
	Jul 19 03:35:10 addons-636193 kubelet[1702]: E0719 03:35:10.857875    1702 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.11.1@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a\\\"\"" pod="ingress-nginx/ingress-nginx-controller-6d9bd977d4-9rsfl" podUID="6fe47a7b-984e-4e00-9377-211a155d030e"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.299422    1702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x7vx\" (UniqueName: \"kubernetes.io/projected/70d46744-6572-4d65-bfcd-d5202aae280b-kube-api-access-4x7vx\") pod \"70d46744-6572-4d65-bfcd-d5202aae280b\" (UID: \"70d46744-6572-4d65-bfcd-d5202aae280b\") "
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.301472    1702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d46744-6572-4d65-bfcd-d5202aae280b-kube-api-access-4x7vx" (OuterVolumeSpecName: "kube-api-access-4x7vx") pod "70d46744-6572-4d65-bfcd-d5202aae280b" (UID: "70d46744-6572-4d65-bfcd-d5202aae280b"). InnerVolumeSpecName "kube-api-access-4x7vx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.400098    1702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4hxv\" (UniqueName: \"kubernetes.io/projected/aedef12a-b05b-4a96-8ae6-df083e301089-kube-api-access-l4hxv\") pod \"aedef12a-b05b-4a96-8ae6-df083e301089\" (UID: \"aedef12a-b05b-4a96-8ae6-df083e301089\") "
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.400180    1702 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4x7vx\" (UniqueName: \"kubernetes.io/projected/70d46744-6572-4d65-bfcd-d5202aae280b-kube-api-access-4x7vx\") on node \"addons-636193\" DevicePath \"\""
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.401940    1702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aedef12a-b05b-4a96-8ae6-df083e301089-kube-api-access-l4hxv" (OuterVolumeSpecName: "kube-api-access-l4hxv") pod "aedef12a-b05b-4a96-8ae6-df083e301089" (UID: "aedef12a-b05b-4a96-8ae6-df083e301089"). InnerVolumeSpecName "kube-api-access-l4hxv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.501372    1702 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l4hxv\" (UniqueName: \"kubernetes.io/projected/aedef12a-b05b-4a96-8ae6-df083e301089-kube-api-access-l4hxv\") on node \"addons-636193\" DevicePath \"\""
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.680120    1702 scope.go:117] "RemoveContainer" containerID="96573c7d8b13abfbdc6c47673b7ab1dbbc93974a721e942cf9bc89114ad1a94b"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.686590    1702 scope.go:117] "RemoveContainer" containerID="96573c7d8b13abfbdc6c47673b7ab1dbbc93974a721e942cf9bc89114ad1a94b"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: E0719 03:35:14.687010    1702 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96573c7d8b13abfbdc6c47673b7ab1dbbc93974a721e942cf9bc89114ad1a94b\": not found" containerID="96573c7d8b13abfbdc6c47673b7ab1dbbc93974a721e942cf9bc89114ad1a94b"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.687042    1702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96573c7d8b13abfbdc6c47673b7ab1dbbc93974a721e942cf9bc89114ad1a94b"} err="failed to get container status \"96573c7d8b13abfbdc6c47673b7ab1dbbc93974a721e942cf9bc89114ad1a94b\": rpc error: code = NotFound desc = an error occurred when try to find container \"96573c7d8b13abfbdc6c47673b7ab1dbbc93974a721e942cf9bc89114ad1a94b\": not found"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.687065    1702 scope.go:117] "RemoveContainer" containerID="73310afdb7242d56c1f8ad219db2059c2cee386c87d74859c0d02ca66a37655a"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.696524    1702 scope.go:117] "RemoveContainer" containerID="73310afdb7242d56c1f8ad219db2059c2cee386c87d74859c0d02ca66a37655a"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: E0719 03:35:14.696960    1702 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73310afdb7242d56c1f8ad219db2059c2cee386c87d74859c0d02ca66a37655a\": not found" containerID="73310afdb7242d56c1f8ad219db2059c2cee386c87d74859c0d02ca66a37655a"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.696999    1702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73310afdb7242d56c1f8ad219db2059c2cee386c87d74859c0d02ca66a37655a"} err="failed to get container status \"73310afdb7242d56c1f8ad219db2059c2cee386c87d74859c0d02ca66a37655a\": rpc error: code = NotFound desc = an error occurred when try to find container \"73310afdb7242d56c1f8ad219db2059c2cee386c87d74859c0d02ca66a37655a\": not found"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.858424    1702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d46744-6572-4d65-bfcd-d5202aae280b" path="/var/lib/kubelet/pods/70d46744-6572-4d65-bfcd-d5202aae280b/volumes"
	Jul 19 03:35:14 addons-636193 kubelet[1702]: I0719 03:35:14.858791    1702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aedef12a-b05b-4a96-8ae6-df083e301089" path="/var/lib/kubelet/pods/aedef12a-b05b-4a96-8ae6-df083e301089/volumes"
	Jul 19 03:35:24 addons-636193 kubelet[1702]: E0719 03:35:24.857636    1702 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.11.1@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a\\\"\"" pod="ingress-nginx/ingress-nginx-controller-6d9bd977d4-9rsfl" podUID="6fe47a7b-984e-4e00-9377-211a155d030e"
	
	
	==> storage-provisioner [3bc69ac3723127e3e2c63d15c8f1de6282606b7cd7f297c676f2ea9d14ce895e] <==
	I0719 03:27:48.368320       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 03:27:48.466380       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 03:27:48.466441       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 03:27:48.557573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 03:27:48.557808       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-636193_63e91626-df23-43b9-815b-d2449fe9555e!
	I0719 03:27:48.558861       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d964afb-76fb-48a2-920b-bcc8403e2c3d", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-636193_63e91626-df23-43b9-815b-d2449fe9555e became leader
	I0719 03:27:48.658400       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-636193_63e91626-df23-43b9-815b-d2449fe9555e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-636193 -n addons-636193
helpers_test.go:261: (dbg) Run:  kubectl --context addons-636193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-d6xmp ingress-nginx-admission-patch-7xmt7 ingress-nginx-controller-6d9bd977d4-9rsfl
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-636193 describe pod ingress-nginx-admission-create-d6xmp ingress-nginx-admission-patch-7xmt7 ingress-nginx-controller-6d9bd977d4-9rsfl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-636193 describe pod ingress-nginx-admission-create-d6xmp ingress-nginx-admission-patch-7xmt7 ingress-nginx-controller-6d9bd977d4-9rsfl: exit status 1 (57.491042ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d6xmp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7xmt7" not found
	Error from server (NotFound): pods "ingress-nginx-controller-6d9bd977d4-9rsfl" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-636193 describe pod ingress-nginx-admission-create-d6xmp ingress-nginx-admission-patch-7xmt7 ingress-nginx-controller-6d9bd977d4-9rsfl: exit status 1
--- FAIL: TestAddons/parallel/Ingress (91.99s)
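The kubelet entries above pinpoint the failure: the ingress-nginx controller pod never became ready because its image stayed in ImagePullBackOff. As a hedged sketch (not part of the original report — the `line` variable is a hypothetical local copy of one kubelet entry), the failing image reference can be isolated from such a log line for a manual pull attempt on the node:

```shell
# The kubelet log reports: Back-off pulling image "registry.k8s.io/ingress-nginx/controller:..."
# Isolate the image reference so it can be retried by hand
# (e.g. with `crictl pull` inside the minikube node).
line='Back-off pulling image "registry.k8s.io/ingress-nginx/controller:v1.11.1@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a"'
image=$(printf '%s\n' "$line" | grep -o 'registry\.k8s\.io[^"]*')
echo "$image"
```
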

                                                
                                    

Test pass (309/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 21.79
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 12.77
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.3
18 TestDownloadOnly/v1.30.3/DeleteAll 0.52
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.3
21 TestDownloadOnly/v1.31.0-beta.0/json-events 40.79
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.19
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
29 TestDownloadOnlyKic 1.1
30 TestBinaryMirror 0.72
31 TestOffline 62.91
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 416.14
38 TestAddons/parallel/Registry 15.94
40 TestAddons/parallel/InspektorGadget 10.89
41 TestAddons/parallel/MetricsServer 5.78
42 TestAddons/parallel/HelmTiller 13.66
44 TestAddons/parallel/CSI 48.82
45 TestAddons/parallel/Headlamp 12.75
46 TestAddons/parallel/CloudSpanner 6.92
47 TestAddons/parallel/LocalPath 12.49
48 TestAddons/parallel/NvidiaDevicePlugin 6.6
49 TestAddons/parallel/Yakd 6
50 TestAddons/parallel/Volcano 37.42
53 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestAddons/StoppedEnableDisable 12.06
55 TestCertOptions 24.36
56 TestCertExpiration 212.73
58 TestForceSystemdFlag 25.44
59 TestForceSystemdEnv 36.22
60 TestDockerEnvContainerd 37.67
61 TestKVMDriverInstallOrUpdate 4.97
65 TestErrorSpam/setup 23.14
66 TestErrorSpam/start 0.55
67 TestErrorSpam/status 0.78
68 TestErrorSpam/pause 1.38
69 TestErrorSpam/unpause 1.38
70 TestErrorSpam/stop 1.32
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 50.45
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 4.96
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.8
82 TestFunctional/serial/CacheCmd/cache/add_local 2.07
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 43.89
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.32
93 TestFunctional/serial/LogsFileCmd 1.31
94 TestFunctional/serial/InvalidService 4.54
96 TestFunctional/parallel/ConfigCmd 0.35
97 TestFunctional/parallel/DashboardCmd 13.6
98 TestFunctional/parallel/DryRun 0.35
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 1.02
104 TestFunctional/parallel/ServiceCmdConnect 7.51
105 TestFunctional/parallel/AddonsCmd 0.13
106 TestFunctional/parallel/PersistentVolumeClaim 28.73
108 TestFunctional/parallel/SSHCmd 0.5
109 TestFunctional/parallel/CpCmd 1.87
110 TestFunctional/parallel/MySQL 19.28
111 TestFunctional/parallel/FileSync 0.27
112 TestFunctional/parallel/CertSync 1.48
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
120 TestFunctional/parallel/License 0.64
121 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
123 TestFunctional/parallel/ProfileCmd/profile_list 0.45
124 TestFunctional/parallel/MountCmd/any-port 8.88
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.21
131 TestFunctional/parallel/ServiceCmd/List 0.41
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
134 TestFunctional/parallel/ServiceCmd/Format 0.35
135 TestFunctional/parallel/ServiceCmd/URL 0.31
136 TestFunctional/parallel/MountCmd/specific-port 1.9
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
138 TestFunctional/parallel/Version/short 0.06
139 TestFunctional/parallel/Version/components 0.69
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.42
145 TestFunctional/parallel/ImageCommands/Setup 1.97
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.69
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.21
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.6
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.64
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.58
158 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
159 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
160 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.85
162 TestFunctional/delete_echo-server_images 0.03
163 TestFunctional/delete_my-image_image 0.01
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 101.16
169 TestMultiControlPlane/serial/DeployApp 32.01
170 TestMultiControlPlane/serial/PingHostFromPods 0.95
171 TestMultiControlPlane/serial/AddWorkerNode 21.28
172 TestMultiControlPlane/serial/NodeLabels 0.06
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.59
174 TestMultiControlPlane/serial/CopyFile 14.53
175 TestMultiControlPlane/serial/StopSecondaryNode 12.43
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.46
177 TestMultiControlPlane/serial/RestartSecondaryNode 15.46
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.6
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.1
180 TestMultiControlPlane/serial/DeleteSecondaryNode 9.77
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.43
182 TestMultiControlPlane/serial/StopCluster 35.51
183 TestMultiControlPlane/serial/RestartCluster 74.82
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
185 TestMultiControlPlane/serial/AddSecondaryNode 35.56
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.6
190 TestJSONOutput/start/Command 52.18
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.62
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.55
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.7
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.18
215 TestKicCustomNetwork/create_custom_network 35.16
216 TestKicCustomNetwork/use_default_bridge_network 22.33
217 TestKicExistingNetwork 22.31
218 TestKicCustomSubnet 23
219 TestKicStaticIP 25.6
220 TestMainNoArgs 0.04
221 TestMinikubeProfile 43.95
224 TestMountStart/serial/StartWithMountFirst 5.56
225 TestMountStart/serial/VerifyMountFirst 0.22
226 TestMountStart/serial/StartWithMountSecond 5.13
227 TestMountStart/serial/VerifyMountSecond 0.22
228 TestMountStart/serial/DeleteFirst 1.54
229 TestMountStart/serial/VerifyMountPostDelete 0.22
230 TestMountStart/serial/Stop 1.17
231 TestMountStart/serial/RestartStopped 6.98
232 TestMountStart/serial/VerifyMountPostStop 0.23
235 TestMultiNode/serial/FreshStart2Nodes 64.87
236 TestMultiNode/serial/DeployApp2Nodes 17.77
237 TestMultiNode/serial/PingHostFrom2Pods 0.64
238 TestMultiNode/serial/AddNode 17.27
239 TestMultiNode/serial/MultiNodeLabels 0.06
240 TestMultiNode/serial/ProfileList 0.27
241 TestMultiNode/serial/CopyFile 8.37
242 TestMultiNode/serial/StopNode 2.04
243 TestMultiNode/serial/StartAfterStop 8.44
244 TestMultiNode/serial/RestartKeepsNodes 82.38
245 TestMultiNode/serial/DeleteNode 5.01
246 TestMultiNode/serial/StopMultiNode 23.68
247 TestMultiNode/serial/RestartMultiNode 52.2
248 TestMultiNode/serial/ValidateNameConflict 22.08
253 TestPreload 153.39
255 TestScheduledStopUnix 97.13
258 TestInsufficientStorage 12.31
259 TestRunningBinaryUpgrade 83.88
261 TestKubernetesUpgrade 315.42
262 TestMissingContainerUpgrade 185.24
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
268 TestNoKubernetes/serial/StartWithK8s 31.26
273 TestNetworkPlugins/group/false 7.65
277 TestNoKubernetes/serial/StartWithStopK8s 22.78
278 TestNoKubernetes/serial/Start 5.98
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
280 TestNoKubernetes/serial/ProfileList 9.13
281 TestStoppedBinaryUpgrade/Setup 2.68
282 TestStoppedBinaryUpgrade/Upgrade 119.87
283 TestNoKubernetes/serial/Stop 1.18
284 TestNoKubernetes/serial/StartNoArgs 6.36
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
294 TestPause/serial/Start 53.99
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
296 TestPause/serial/SecondStartNoReconfiguration 5.54
297 TestPause/serial/Pause 0.71
298 TestPause/serial/VerifyStatus 0.3
299 TestPause/serial/Unpause 0.61
300 TestPause/serial/PauseAgain 0.77
301 TestPause/serial/DeletePaused 2.93
302 TestPause/serial/VerifyDeletedResources 0.53
303 TestNetworkPlugins/group/auto/Start 53.9
304 TestNetworkPlugins/group/kindnet/Start 52.15
305 TestNetworkPlugins/group/auto/KubeletFlags 0.26
306 TestNetworkPlugins/group/auto/NetCatPod 9.19
307 TestNetworkPlugins/group/auto/DNS 0.11
308 TestNetworkPlugins/group/auto/Localhost 0.1
309 TestNetworkPlugins/group/auto/HairPin 0.09
310 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
312 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
313 TestNetworkPlugins/group/calico/Start 60
314 TestNetworkPlugins/group/kindnet/DNS 0.13
315 TestNetworkPlugins/group/kindnet/Localhost 0.11
316 TestNetworkPlugins/group/kindnet/HairPin 0.1
317 TestNetworkPlugins/group/custom-flannel/Start 51.37
318 TestNetworkPlugins/group/enable-default-cni/Start 40.13
319 TestNetworkPlugins/group/calico/ControllerPod 6.01
320 TestNetworkPlugins/group/calico/KubeletFlags 0.25
321 TestNetworkPlugins/group/calico/NetCatPod 9.17
322 TestNetworkPlugins/group/calico/DNS 0.12
323 TestNetworkPlugins/group/calico/Localhost 0.1
324 TestNetworkPlugins/group/calico/HairPin 0.1
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.19
327 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
328 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.22
329 TestNetworkPlugins/group/custom-flannel/DNS 0.15
330 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
331 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
335 TestNetworkPlugins/group/flannel/Start 56.7
336 TestNetworkPlugins/group/bridge/Start 40.22
338 TestStartStop/group/old-k8s-version/serial/FirstStart 134.67
340 TestStartStop/group/no-preload/serial/FirstStart 75.72
341 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
342 TestNetworkPlugins/group/bridge/NetCatPod 9.57
343 TestNetworkPlugins/group/bridge/DNS 0.15
344 TestNetworkPlugins/group/bridge/Localhost 0.1
345 TestNetworkPlugins/group/bridge/HairPin 0.1
346 TestNetworkPlugins/group/flannel/ControllerPod 6.01
347 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
348 TestNetworkPlugins/group/flannel/NetCatPod 8.23
349 TestNetworkPlugins/group/flannel/DNS 0.14
350 TestNetworkPlugins/group/flannel/Localhost 0.12
351 TestNetworkPlugins/group/flannel/HairPin 0.15
353 TestStartStop/group/embed-certs/serial/FirstStart 54.34
355 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.54
356 TestStartStop/group/no-preload/serial/DeployApp 9.26
357 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
358 TestStartStop/group/no-preload/serial/Stop 12.05
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
360 TestStartStop/group/no-preload/serial/SecondStart 262.68
361 TestStartStop/group/embed-certs/serial/DeployApp 9.27
362 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
363 TestStartStop/group/embed-certs/serial/Stop 12.25
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
365 TestStartStop/group/embed-certs/serial/SecondStart 262.71
366 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
367 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
369 TestStartStop/group/old-k8s-version/serial/Stop 12.21
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.05
372 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
373 TestStartStop/group/old-k8s-version/serial/SecondStart 57.77
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.29
376 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 23.01
377 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
378 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
379 TestStartStop/group/old-k8s-version/serial/Pause 2.43
381 TestStartStop/group/newest-cni/serial/FirstStart 26.6
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.75
384 TestStartStop/group/newest-cni/serial/Stop 1.18
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
386 TestStartStop/group/newest-cni/serial/SecondStart 12.82
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
390 TestStartStop/group/newest-cni/serial/Pause 2.59
391 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
392 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
393 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
394 TestStartStop/group/no-preload/serial/Pause 2.67
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
396 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
398 TestStartStop/group/embed-certs/serial/Pause 2.44
399 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
400 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
401 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
402 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.51
TestDownloadOnly/v1.20.0/json-events (21.79s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-972036 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-972036 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (21.788166449s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (21.79s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-972036
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-972036: exit status 85 (54.768882ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-972036 | jenkins | v1.33.1 | 19 Jul 24 03:25 UTC |          |
	|         | -p download-only-972036        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:25:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:25:34.676149   11912 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:25:34.676234   11912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:25:34.676241   11912 out.go:304] Setting ErrFile to fd 2...
	I0719 03:25:34.676245   11912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:25:34.676402   11912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	W0719 03:25:34.676499   11912 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19302-5122/.minikube/config/config.json: open /home/jenkins/minikube-integration/19302-5122/.minikube/config/config.json: no such file or directory
	I0719 03:25:34.677018   11912 out.go:298] Setting JSON to true
	I0719 03:25:34.677850   11912 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":479,"bootTime":1721359056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:25:34.677907   11912 start.go:139] virtualization: kvm guest
	I0719 03:25:34.680177   11912 out.go:97] [download-only-972036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0719 03:25:34.680274   11912 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 03:25:34.680320   11912 notify.go:220] Checking for updates...
	I0719 03:25:34.681711   11912 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:25:34.683159   11912 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:25:34.684308   11912 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	I0719 03:25:34.685447   11912 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	I0719 03:25:34.686599   11912 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 03:25:34.688581   11912 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:25:34.688771   11912 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:25:34.709145   11912 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 03:25:34.709222   11912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:25:35.042212   11912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-19 03:25:35.03387078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:25:35.042313   11912 docker.go:307] overlay module found
	I0719 03:25:35.043857   11912 out.go:97] Using the docker driver based on user configuration
	I0719 03:25:35.043879   11912 start.go:297] selected driver: docker
	I0719 03:25:35.043889   11912 start.go:901] validating driver "docker" against <nil>
	I0719 03:25:35.043972   11912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:25:35.092011   11912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-19 03:25:35.083908004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:25:35.092179   11912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:25:35.092616   11912 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0719 03:25:35.092762   11912 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:25:35.094465   11912 out.go:169] Using Docker driver with root privileges
	I0719 03:25:35.095650   11912 cni.go:84] Creating CNI manager for ""
	I0719 03:25:35.095668   11912 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0719 03:25:35.095678   11912 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 03:25:35.095743   11912 start.go:340] cluster config:
	{Name:download-only-972036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-972036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:25:35.097118   11912 out.go:97] Starting "download-only-972036" primary control-plane node in "download-only-972036" cluster
	I0719 03:25:35.097132   11912 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0719 03:25:35.098350   11912 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0719 03:25:35.098370   11912 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0719 03:25:35.098476   11912 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 03:25:35.113127   11912 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:25:35.113290   11912 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 03:25:35.113374   11912 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:25:35.207399   11912 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0719 03:25:35.207428   11912 cache.go:56] Caching tarball of preloaded images
	I0719 03:25:35.207573   11912 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0719 03:25:35.209487   11912 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 03:25:35.209501   11912 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0719 03:25:35.320653   11912 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0719 03:25:48.390845   11912 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0719 03:25:48.390940   11912 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0719 03:25:49.138360   11912 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 03:25:49.310330   11912 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0719 03:25:49.310665   11912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/download-only-972036/config.json ...
	I0719 03:25:49.310698   11912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/download-only-972036/config.json: {Name:mk55a54a12eafd69095d1afc9ca641dc1ac8a94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:25:49.310892   11912 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0719 03:25:49.311068   11912 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19302-5122/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-972036 host does not exist
	  To start a cluster, run: "minikube start -p download-only-972036"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-972036
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.30.3/json-events (12.77s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-601538 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-601538 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.76973606s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.77s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-601538
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-601538: exit status 85 (302.854474ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-972036 | jenkins | v1.33.1 | 19 Jul 24 03:25 UTC |                     |
	|         | -p download-only-972036        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Jul 24 03:25 UTC | 19 Jul 24 03:25 UTC |
	| delete  | -p download-only-972036        | download-only-972036 | jenkins | v1.33.1 | 19 Jul 24 03:25 UTC | 19 Jul 24 03:25 UTC |
	| start   | -o=json --download-only        | download-only-601538 | jenkins | v1.33.1 | 19 Jul 24 03:25 UTC |                     |
	|         | -p download-only-601538        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:25:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:25:56.840658   12314 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:25:56.840872   12314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:25:56.840879   12314 out.go:304] Setting ErrFile to fd 2...
	I0719 03:25:56.840884   12314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:25:56.841032   12314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 03:25:56.841535   12314 out.go:298] Setting JSON to true
	I0719 03:25:56.842327   12314 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":501,"bootTime":1721359056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:25:56.842375   12314 start.go:139] virtualization: kvm guest
	I0719 03:25:56.844208   12314 out.go:97] [download-only-601538] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 03:25:56.844341   12314 notify.go:220] Checking for updates...
	I0719 03:25:56.845557   12314 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:25:56.846780   12314 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:25:56.847839   12314 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	I0719 03:25:56.848825   12314 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	I0719 03:25:56.849888   12314 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 03:25:56.852217   12314 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:25:56.852418   12314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:25:56.872651   12314 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 03:25:56.872737   12314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:25:56.918318   12314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-19 03:25:56.909336364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:25:56.918418   12314 docker.go:307] overlay module found
	I0719 03:25:56.919974   12314 out.go:97] Using the docker driver based on user configuration
	I0719 03:25:56.920003   12314 start.go:297] selected driver: docker
	I0719 03:25:56.920015   12314 start.go:901] validating driver "docker" against <nil>
	I0719 03:25:56.920095   12314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:25:56.966052   12314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-19 03:25:56.95651849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:25:56.966262   12314 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:25:56.966808   12314 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0719 03:25:56.967008   12314 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:25:56.968598   12314 out.go:169] Using Docker driver with root privileges
	I0719 03:25:56.969667   12314 cni.go:84] Creating CNI manager for ""
	I0719 03:25:56.969691   12314 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0719 03:25:56.969703   12314 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 03:25:56.969789   12314 start.go:340] cluster config:
	{Name:download-only-601538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-601538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:25:56.971003   12314 out.go:97] Starting "download-only-601538" primary control-plane node in "download-only-601538" cluster
	I0719 03:25:56.971029   12314 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0719 03:25:56.972026   12314 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0719 03:25:56.972071   12314 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0719 03:25:56.972097   12314 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 03:25:56.988115   12314 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:25:56.988254   12314 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 03:25:56.988277   12314 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 03:25:56.988287   12314 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 03:25:56.988296   12314 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 03:25:57.073309   12314 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4
	I0719 03:25:57.073367   12314 cache.go:56] Caching tarball of preloaded images
	I0719 03:25:57.073545   12314 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0719 03:25:57.075234   12314 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 03:25:57.075259   12314 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4 ...
	I0719 03:25:57.185519   12314 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:1b8c063785761b3e6ff228c42e3a8cf1 -> /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-601538 host does not exist
	  To start a cluster, run: "minikube start -p download-only-601538"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.30s)

TestDownloadOnly/v1.30.3/DeleteAll (0.52s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.52s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.3s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-601538
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.30s)

TestDownloadOnly/v1.31.0-beta.0/json-events (40.79s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-548503 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-548503 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (40.789709577s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (40.79s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-548503
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-548503: exit status 85 (56.234556ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-972036 | jenkins | v1.33.1 | 19 Jul 24 03:25 UTC |                     |
	|         | -p download-only-972036             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 03:25 UTC | 19 Jul 24 03:25 UTC |
	| delete  | -p download-only-972036             | download-only-972036 | jenkins | v1.33.1 | 19 Jul 24 03:25 UTC | 19 Jul 24 03:25 UTC |
	| start   | -o=json --download-only             | download-only-601538 | jenkins | v1.33.1 | 19 Jul 24 03:25 UTC |                     |
	|         | -p download-only-601538             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| delete  | -p download-only-601538             | download-only-601538 | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| start   | -o=json --download-only             | download-only-548503 | jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | -p download-only-548503             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:26:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:26:10.738287   12692 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:26:10.738458   12692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:26:10.738471   12692 out.go:304] Setting ErrFile to fd 2...
	I0719 03:26:10.738478   12692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:26:10.738946   12692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 03:26:10.739509   12692 out.go:298] Setting JSON to true
	I0719 03:26:10.740273   12692 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":515,"bootTime":1721359056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:26:10.740330   12692 start.go:139] virtualization: kvm guest
	I0719 03:26:10.771117   12692 out.go:97] [download-only-548503] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 03:26:10.771245   12692 notify.go:220] Checking for updates...
	I0719 03:26:10.813609   12692 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:26:10.834418   12692 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:26:10.970603   12692 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	I0719 03:26:11.091753   12692 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	I0719 03:26:11.157843   12692 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 03:26:11.262361   12692 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:26:11.262704   12692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:26:11.283805   12692 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 03:26:11.283923   12692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:26:11.332608   12692 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-19 03:26:11.323638911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:26:11.332711   12692 docker.go:307] overlay module found
	I0719 03:26:11.367169   12692 out.go:97] Using the docker driver based on user configuration
	I0719 03:26:11.367210   12692 start.go:297] selected driver: docker
	I0719 03:26:11.367220   12692 start.go:901] validating driver "docker" against <nil>
	I0719 03:26:11.367320   12692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:26:11.415159   12692 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-19 03:26:11.406844374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:26:11.415349   12692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:26:11.415816   12692 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0719 03:26:11.415981   12692 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:26:11.458977   12692 out.go:169] Using Docker driver with root privileges
	I0719 03:26:11.522414   12692 cni.go:84] Creating CNI manager for ""
	I0719 03:26:11.522456   12692 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0719 03:26:11.522472   12692 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 03:26:11.522578   12692 start.go:340] cluster config:
	{Name:download-only-548503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-548503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:26:11.596517   12692 out.go:97] Starting "download-only-548503" primary control-plane node in "download-only-548503" cluster
	I0719 03:26:11.596622   12692 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0719 03:26:11.617087   12692 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0719 03:26:11.617148   12692 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0719 03:26:11.617239   12692 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 03:26:11.632766   12692 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:26:11.632893   12692 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 03:26:11.632907   12692 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 03:26:11.632912   12692 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 03:26:11.632921   12692 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 03:26:11.727398   12692 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0719 03:26:11.727429   12692 cache.go:56] Caching tarball of preloaded images
	I0719 03:26:11.727597   12692 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0719 03:26:11.773532   12692 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 03:26:11.773583   12692 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0719 03:26:11.883901   12692 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:317e542de842a84eade9a0e3b4ea7005 -> /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0719 03:26:22.390296   12692 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0719 03:26:22.390385   12692 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19302-5122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0719 03:26:23.130045   12692 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on containerd
	I0719 03:26:23.130363   12692 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/download-only-548503/config.json ...
	I0719 03:26:23.130389   12692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/download-only-548503/config.json: {Name:mkdb338c807f7a84984ee2525003e125cefcd5b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:26:23.130573   12692 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0719 03:26:23.130765   12692 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19302-5122/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-548503 host does not exist
	  To start a cluster, run: "minikube start -p download-only-548503"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
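The downloads logged above (preload.go:236-254, download.go:107) are only trusted after a checksum check: minikube fetches the artifact plus its published checksum, recomputes, and compares. A minimal local sketch of that gate, with made-up filenames and no network access (not the actual download.go code):

```shell
# Stand-in for a downloaded artifact (hypothetical name):
printf 'fake-preload-contents' > preload.tar.lz4

# Stand-in for the published checksum file (analogous to kubectl.sha256 on dl.k8s.io):
sha256sum preload.tar.lz4 | awk '{print $1}' > preload.tar.lz4.sha256

# The gate: recompute, compare, and only then trust the file.
expected=$(cat preload.tar.lz4.sha256)
actual=$(sha256sum preload.tar.lz4 | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
    exit 1
fi
```

The real code uses either an md5 query parameter on the GCS URL or a `checksum=file:` reference to a remote .sha256, as seen in the download.go lines above, but the shape of the check is the same.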

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.19s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-548503
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.1s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-927594 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-927594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-927594
--- PASS: TestDownloadOnlyKic (1.10s)

TestBinaryMirror (0.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-415059 --alsologtostderr --binary-mirror http://127.0.0.1:41655 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-415059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-415059
--- PASS: TestBinaryMirror (0.72s)

TestOffline (62.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-995424 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-995424 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m0.040748761s)
helpers_test.go:175: Cleaning up "offline-containerd-995424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-995424
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-995424: (2.871590482s)
--- PASS: TestOffline (62.91s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-636193
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-636193: exit status 85 (46.930962ms)

-- stdout --
	* Profile "addons-636193" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-636193"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-636193
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-636193: exit status 85 (47.968051ms)

-- stdout --
	* Profile "addons-636193" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-636193"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (416.14s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-636193 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-636193 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m56.138695166s)
--- PASS: TestAddons/Setup (416.14s)

TestAddons/parallel/Registry (15.94s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 14.328688ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-2crwh" [6b5c99b6-8fa3-406b-b438-81cf572e6546] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004742996s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qbxh9" [eae5240d-9ec3-42ce-969d-004f729dcd15] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004066093s
addons_test.go:342: (dbg) Run:  kubectl --context addons-636193 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-636193 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-636193 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.099970226s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.94s)

TestAddons/parallel/InspektorGadget (10.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-r5znn" [a72e7f94-f7df-4ec1-bd1b-60982943ade0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004878344s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-636193
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-636193: (5.88508727s)
--- PASS: TestAddons/parallel/InspektorGadget (10.89s)

TestAddons/parallel/MetricsServer (5.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.076793ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-7d8vw" [8725dfae-09aa-48d9-b58e-32c4ab8bd284] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004092766s
addons_test.go:417: (dbg) Run:  kubectl --context addons-636193 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

TestAddons/parallel/HelmTiller (13.66s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 11.428971ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-tq89t" [4fbc4740-9ca6-4d39-951e-8659619897a4] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005055283s
addons_test.go:475: (dbg) Run:  kubectl --context addons-636193 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-636193 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.11945175s)
addons_test.go:480: kubectl --context addons-636193 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:475: (dbg) Run:  kubectl --context addons-636193 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-636193 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.285358231s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.66s)

TestAddons/parallel/CSI (48.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 9.732762ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-636193 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-636193 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [26cef914-f36a-48f7-b251-00db8952b9c0] Pending
helpers_test.go:344: "task-pv-pod" [26cef914-f36a-48f7-b251-00db8952b9c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [26cef914-f36a-48f7-b251-00db8952b9c0] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003507506s
addons_test.go:586: (dbg) Run:  kubectl --context addons-636193 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-636193 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-636193 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-636193 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-636193 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-636193 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-636193 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f57e3ebc-c1e5-4962-a7a9-31a6a672c429] Pending
helpers_test.go:344: "task-pv-pod-restore" [f57e3ebc-c1e5-4962-a7a9-31a6a672c429] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f57e3ebc-c1e5-4962-a7a9-31a6a672c429] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003168181s
addons_test.go:628: (dbg) Run:  kubectl --context addons-636193 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-636193 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-636193 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-636193 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.519469352s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.82s)

TestAddons/parallel/Headlamp (12.75s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-636193 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-tlxrd" [cca74fdb-21a7-47a3-8629-4796e2ea8d63] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-tlxrd" [cca74fdb-21a7-47a3-8629-4796e2ea8d63] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.002345296s
--- PASS: TestAddons/parallel/Headlamp (12.75s)

TestAddons/parallel/CloudSpanner (6.92s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-rllmf" [0e23d722-b4d8-437d-adbe-9921d1954056] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.046477493s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-636193
--- PASS: TestAddons/parallel/CloudSpanner (6.92s)

TestAddons/parallel/LocalPath (12.49s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-636193 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-636193 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636193 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [94844480-e4ab-4cf2-ad68-6fb8d811e809] Pending
helpers_test.go:344: "test-local-path" [94844480-e4ab-4cf2-ad68-6fb8d811e809] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [94844480-e4ab-4cf2-ad68-6fb8d811e809] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [94844480-e4ab-4cf2-ad68-6fb8d811e809] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003569261s
addons_test.go:992: (dbg) Run:  kubectl --context addons-636193 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 ssh "cat /opt/local-path-provisioner/pvc-b1bb7e8e-cb3c-4dfa-bcf4-226a66c3e989_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-636193 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-636193 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.49s)

TestAddons/parallel/NvidiaDevicePlugin (6.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wt852" [7e9a607b-91f6-4f69-874a-07f2a9d578c8] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003846259s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-636193
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-rqsh5" [de58c324-895f-4e1b-9581-39c32ab788f9] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004022833s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/parallel/Volcano (37.42s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 12.906377ms
addons_test.go:889: volcano-scheduler stabilized in 13.372311ms
addons_test.go:897: volcano-admission stabilized in 13.812886ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-sq4vq" [205b491d-baef-4f0d-a299-076bd6f0ca9d] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003185748s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-g5ctj" [4122bba9-52e8-4f46-87e5-dc2898cc22d3] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.003459703s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-dltwc" [a64efc41-fe03-4a37-85d7-51fff209581e] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003107308s
addons_test.go:924: (dbg) Run:  kubectl --context addons-636193 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-636193 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-636193 get vcjob -n my-volcano
2024/07/19 03:34:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [83ce8531-7924-4e93-9fef-1adbc98db5ee] Pending
helpers_test.go:344: "test-job-nginx-0" [83ce8531-7924-4e93-9fef-1adbc98db5ee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [83ce8531-7924-4e93-9fef-1adbc98db5ee] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 12.002724347s
addons_test.go:960: (dbg) Run:  out/minikube-linux-amd64 -p addons-636193 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-amd64 -p addons-636193 addons disable volcano --alsologtostderr -v=1: (10.065849931s)
--- PASS: TestAddons/parallel/Volcano (37.42s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-636193 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-636193 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-636193
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-636193: (11.829692291s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-636193
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-636193
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-636193
--- PASS: TestAddons/StoppedEnableDisable (12.06s)

TestCertOptions (24.36s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-763292 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-763292 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (21.78947434s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-763292 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-763292 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-763292 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-763292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-763292
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-763292: (2.01712874s)
--- PASS: TestCertOptions (24.36s)

TestCertExpiration (212.73s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-394011 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-394011 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (24.759933354s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-394011 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-394011 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.201802478s)
helpers_test.go:175: Cleaning up "cert-expiration-394011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-394011
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-394011: (2.770090561s)
--- PASS: TestCertExpiration (212.73s)

TestForceSystemdFlag (25.44s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-251206 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-251206 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.933779537s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-251206 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-251206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-251206
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-251206: (2.243758347s)
--- PASS: TestForceSystemdFlag (25.44s)

TestForceSystemdEnv (36.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-039864 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-039864 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.559935767s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-039864 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-039864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-039864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-039864: (3.387506558s)
--- PASS: TestForceSystemdEnv (36.22s)

TestDockerEnvContainerd (37.67s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-185822 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-185822 --driver=docker  --container-runtime=containerd: (22.058000011s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-185822"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-r7vWQVzzWfJf/agent.37553" SSH_AGENT_PID="37554" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-r7vWQVzzWfJf/agent.37553" SSH_AGENT_PID="37554" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-r7vWQVzzWfJf/agent.37553" SSH_AGENT_PID="37554" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.946278646s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-r7vWQVzzWfJf/agent.37553" SSH_AGENT_PID="37554" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-185822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-185822
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-185822: (1.814770674s)
--- PASS: TestDockerEnvContainerd (37.67s)

TestKVMDriverInstallOrUpdate (4.97s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.97s)

TestErrorSpam/setup (23.14s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-058293 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-058293 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-058293 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-058293 --driver=docker  --container-runtime=containerd: (23.137811748s)
--- PASS: TestErrorSpam/setup (23.14s)

TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.38s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 pause
--- PASS: TestErrorSpam/pause (1.38s)

TestErrorSpam/unpause (1.38s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 unpause
--- PASS: TestErrorSpam/unpause (1.38s)

TestErrorSpam/stop (1.32s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 stop: (1.160145287s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-058293 --log_dir /tmp/nospam-058293 stop
--- PASS: TestErrorSpam/stop (1.32s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19302-5122/.minikube/files/etc/test/nested/copy/11900/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.45s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-966390 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-966390 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (50.446897252s)
--- PASS: TestFunctional/serial/StartWithProxy (50.45s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (4.96s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-966390 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-966390 --alsologtostderr -v=8: (4.95737843s)
functional_test.go:659: soft start took 4.958118287s for "functional-966390" cluster.
--- PASS: TestFunctional/serial/SoftStart (4.96s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-966390 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-966390 cache add registry.k8s.io/pause:3.3: (1.038694171s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.80s)

TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-966390 /tmp/TestFunctionalserialCacheCmdcacheadd_local3280286467/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 cache add minikube-local-cache-test:functional-966390
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-966390 cache add minikube-local-cache-test:functional-966390: (1.785836234s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 cache delete minikube-local-cache-test:functional-966390
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-966390
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (245.318147ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 kubectl -- --context functional-966390 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-966390 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (43.89s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-966390 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0719 03:38:50.236459   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:50.242327   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:50.252652   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:50.272919   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:50.313181   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:50.393620   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:50.554039   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:50.874601   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:51.515529   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:52.795857   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:38:55.356629   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:39:00.477136   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-966390 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.892166685s)
functional_test.go:757: restart took 43.892283304s for "functional-966390" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.89s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-966390 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.32s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 logs
E0719 03:39:10.718136   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-966390 logs: (1.316044882s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

TestFunctional/serial/LogsFileCmd (1.31s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 logs --file /tmp/TestFunctionalserialLogsFileCmd2937119690/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-966390 logs --file /tmp/TestFunctionalserialLogsFileCmd2937119690/001/logs.txt: (1.305767019s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

TestFunctional/serial/InvalidService (4.54s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-966390 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-966390
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-966390: exit status 115 (297.738953ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31048 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-966390 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-966390 delete -f testdata/invalidsvc.yaml: (1.074823784s)
--- PASS: TestFunctional/serial/InvalidService (4.54s)

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 config get cpus: exit status 14 (76.135742ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 config get cpus: exit status 14 (52.398705ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

TestFunctional/parallel/DashboardCmd (13.6s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-966390 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-966390 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 58174: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.60s)

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-966390 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-966390 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (163.682106ms)

-- stdout --
	* [functional-966390] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0719 03:39:26.488984   56881 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:39:26.489200   56881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:39:26.489224   56881 out.go:304] Setting ErrFile to fd 2...
	I0719 03:39:26.489241   56881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:39:26.489536   56881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 03:39:26.490239   56881 out.go:298] Setting JSON to false
	I0719 03:39:26.491594   56881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1310,"bootTime":1721359056,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:39:26.491675   56881 start.go:139] virtualization: kvm guest
	I0719 03:39:26.493991   56881 out.go:177] * [functional-966390] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 03:39:26.495450   56881 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:39:26.495478   56881 notify.go:220] Checking for updates...
	I0719 03:39:26.497703   56881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:39:26.498934   56881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	I0719 03:39:26.500313   56881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	I0719 03:39:26.501635   56881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 03:39:26.502893   56881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:39:26.504650   56881 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 03:39:26.505296   56881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:39:26.537011   56881 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 03:39:26.537157   56881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:39:26.599139   56881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-19 03:39:26.588394736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:39:26.599270   56881 docker.go:307] overlay module found
	I0719 03:39:26.601901   56881 out.go:177] * Using the docker driver based on existing profile
	I0719 03:39:26.603279   56881 start.go:297] selected driver: docker
	I0719 03:39:26.603303   56881 start.go:901] validating driver "docker" against &{Name:functional-966390 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-966390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:39:26.603415   56881 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:39:26.605681   56881 out.go:177] 
	W0719 03:39:26.606954   56881 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 03:39:26.608169   56881 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-966390 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.35s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-966390 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-966390 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (128.661233ms)

-- stdout --
	* [functional-966390] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0719 03:39:26.836150   57263 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:39:26.836376   57263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:39:26.836384   57263 out.go:304] Setting ErrFile to fd 2...
	I0719 03:39:26.836389   57263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:39:26.836649   57263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 03:39:26.837138   57263 out.go:298] Setting JSON to false
	I0719 03:39:26.838061   57263 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1311,"bootTime":1721359056,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:39:26.838114   57263 start.go:139] virtualization: kvm guest
	I0719 03:39:26.839936   57263 out.go:177] * [functional-966390] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0719 03:39:26.841118   57263 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:39:26.841185   57263 notify.go:220] Checking for updates...
	I0719 03:39:26.843220   57263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:39:26.844411   57263 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	I0719 03:39:26.845618   57263 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	I0719 03:39:26.846813   57263 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 03:39:26.847948   57263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:39:26.849374   57263 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 03:39:26.849806   57263 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:39:26.870948   57263 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 03:39:26.871070   57263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:39:26.916073   57263 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-19 03:39:26.907062212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:39:26.916175   57263 docker.go:307] overlay module found
	I0719 03:39:26.917897   57263 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0719 03:39:26.918938   57263 start.go:297] selected driver: docker
	I0719 03:39:26.918954   57263 start.go:901] validating driver "docker" against &{Name:functional-966390 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-966390 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:39:26.919063   57263 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:39:26.920927   57263 out.go:177] 
	W0719 03:39:26.922217   57263 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 03:39:26.923482   57263 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
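The test above asserts that minikube emits the RSRC_INSUFFICIENT_REQ_MEMORY failure in French when the locale calls for it. The lookup pattern can be sketched as below; this is an illustrative Python model, not minikube's actual i18n machinery (which uses translated JSON catalogues), and the `MESSAGES` dictionary is an invented stand-in seeded with the two message variants visible in this log:

```python
# Locale-keyed message catalogue (illustrative; real minikube loads
# translations from JSON files bundled with the binary).
MESSAGES = {
    "en": "Exiting due to {code}: Requested memory allocation {req}MiB is less than the usable minimum of {minimum}MB",
    "fr": "Fermeture en raison de {code} : L'allocation de mémoire demandée {req} Mio est inférieure au minimum utilisable de {minimum} Mo",
}

def localized_error(lang: str, code: str, req: int, minimum: int) -> str:
    """Pick the catalogue entry for `lang`, falling back to English."""
    template = MESSAGES.get(lang, MESSAGES["en"])
    return template.format(code=code, req=req, minimum=minimum)

french = localized_error("fr", "RSRC_INSUFFICIENT_REQ_MEMORY", 250, 1800)
fallback = localized_error("de", "RSRC_INSUFFICIENT_REQ_MEMORY", 250, 1800)
```

The fallback-to-English branch mirrors why the same code (`RSRC_INSUFFICIENT_REQ_MEMORY`) appears in both the English DryRun failure and the French one here.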

                                                
                                    
TestFunctional/parallel/StatusCmd (1.02s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
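The second invocation above passes a custom Go-template format (`host:{{.Host}},kublet:{{.Kubelet}},...`), which renders status as one comma-separated `key:value` line. Consuming that output can be sketched as follows; the parser and the sample values are assumptions for illustration, not captured from this run:

```python
def parse_status(line: str) -> dict:
    """Split a 'k1:v1,k2:v2,...' status line (as produced by a custom
    -f 'host:{{.Host}},...' format string) into a dict."""
    return dict(pair.split(":", 1) for pair in line.strip().split(","))

# Hypothetical output shape for a healthy cluster:
status = parse_status("host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured")
```

Splitting on the first `:` only (`split(":", 1)`) keeps any value that itself contains a colon intact.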

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.51s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-966390 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-966390 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2sxxb" [c441d208-34b3-4c4d-bd8b-619ce53e8ae3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2sxxb" [c441d208-34b3-4c4d-bd8b-619ce53e8ae3] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00345855s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30619
functional_test.go:1671: http://192.168.49.2:30619: success! body:

Hostname: hello-node-connect-57b4589c47-2sxxb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30619
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.51s)
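Once `minikube service --url` reports the NodePort endpoint (`http://192.168.49.2:30619` here), the harness still has to poll it, since the service can briefly refuse connections after the pod turns Ready. The wait loop can be sketched as below; it is a minimal model, not the test's code, and `fake_fetch` is an invented stub so the sketch runs without a live cluster:

```python
import time

def wait_for_endpoint(url, fetch, timeout=30.0, interval=0.5):
    """Poll fetch(url) until it returns HTTP 200 or the deadline passes.
    `fetch` is injected so connection behaviour can be faked in tests."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if fetch(url) == 200:
                return True
        except OSError:
            pass  # NodePort not accepting connections yet
        time.sleep(interval)
    return False

# Stub standing in for a real HTTP client: refused, then 503, then 200.
responses = iter([OSError("connection refused"), 503, 200])
def fake_fetch(url):
    r = next(responses)
    if isinstance(r, Exception):
        raise r
    return r

ready = wait_for_endpoint("http://192.168.49.2:30619", fake_fetch, timeout=5.0, interval=0.0)
```

Treating both connection errors and non-200 responses as "retry" matches how a NodePort behaves while kube-proxy rules and the backend pod settle.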

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.73s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [274c6a5b-b0f0-4741-8fcb-9621dda49da1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003909731s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-966390 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-966390 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-966390 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-966390 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4754bb89-03cb-4170-96ae-711066c65e33] Pending
helpers_test.go:344: "sp-pod" [4754bb89-03cb-4170-96ae-711066c65e33] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4754bb89-03cb-4170-96ae-711066c65e33] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004048873s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-966390 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-966390 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-966390 delete -f testdata/storage-provisioner/pod.yaml: (1.98734296s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-966390 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c732cd2e-6d20-48f1-884a-04c22f829b89] Pending
helpers_test.go:344: "sp-pod" [c732cd2e-6d20-48f1-884a-04c22f829b89] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c732cd2e-6d20-48f1-884a-04c22f829b89] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003767907s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-966390 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.73s)
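The property this test checks is that the claim-backed volume outlives any single pod: `touch /tmp/mount/foo` in the first `sp-pod`, delete it, recreate it from the same manifest, and the file is still there. A toy Python model of that lifecycle (purely illustrative; `Volume`/`Pod` are invented names, not Kubernetes API objects):

```python
class Volume:
    """Stands in for the PVC-backed volume: lives independently of pods."""
    def __init__(self):
        self.files = {}

class Pod:
    """Stands in for sp-pod: mounts an existing volume, may be deleted."""
    def __init__(self, volume):
        self.volume = volume
    def touch(self, path):
        self.volume.files[path] = b""
    def ls(self):
        return sorted(self.volume.files)

pvc = Volume()                 # kubectl apply -f pvc.yaml
pod1 = Pod(pvc)                # first sp-pod binds the claim
pod1.touch("/tmp/mount/foo")   # kubectl exec sp-pod -- touch /tmp/mount/foo
del pod1                       # kubectl delete -f pod.yaml
pod2 = Pod(pvc)                # recreated sp-pod mounts the same claim
```

Because both pods reference the same `Volume` object, deleting `pod1` cannot discard the file, which is exactly the persistence guarantee the provisioner must provide.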

                                                
                                    
TestFunctional/parallel/SSHCmd (0.5s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.87s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh -n functional-966390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 cp functional-966390:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2892983379/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh -n functional-966390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh -n functional-966390 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.87s)

                                                
                                    
TestFunctional/parallel/MySQL (19.28s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-966390 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-bpd7w" [5c7c2832-8173-41b6-b1d6-711c96bcefa7] Pending
helpers_test.go:344: "mysql-64454c8b5c-bpd7w" [5c7c2832-8173-41b6-b1d6-711c96bcefa7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-bpd7w" [5c7c2832-8173-41b6-b1d6-711c96bcefa7] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.004641477s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-966390 exec mysql-64454c8b5c-bpd7w -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-966390 exec mysql-64454c8b5c-bpd7w -- mysql -ppassword -e "show databases;": exit status 1 (91.273438ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-966390 exec mysql-64454c8b5c-bpd7w -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-966390 exec mysql-64454c8b5c-bpd7w -- mysql -ppassword -e "show databases;": exit status 1 (93.344316ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-966390 exec mysql-64454c8b5c-bpd7w -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-966390 exec mysql-64454c8b5c-bpd7w -- mysql -ppassword -e "show databases;": exit status 1 (90.474529ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-966390 exec mysql-64454c8b5c-bpd7w -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.28s)
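The retries above show the two transient phases of MySQL startup: ERROR 1045 while the root password is still being applied by the init scripts, then ERROR 2002 while the server socket is not yet listening. The harness simply re-runs the query until one succeeds. That loop can be sketched as follows; this is an illustrative Python model, not the test's code, and `run_query`/`outcomes` are invented stand-ins that replay the error sequence from this log:

```python
# Error codes this sketch treats as "server still starting up".
TRANSIENT = ("ERROR 1045", "ERROR 2002")

def run_until_ready(run_query, attempts=10):
    """Re-run a query while it fails with known startup-time errors;
    raise immediately on anything unexpected."""
    last = ""
    for _ in range(attempts):
        ok, stderr = run_query()
        if ok:
            return True
        if not any(code in stderr for code in TRANSIENT):
            raise RuntimeError("non-transient mysql failure: " + stderr)
        last = stderr
    raise TimeoutError("mysql never became ready: " + last)

# Replay of the outcomes observed above: 1045, 2002, 2002, then success.
outcomes = iter([
    (False, "ERROR 1045 (28000): Access denied for user 'root'@'localhost'"),
    (False, "ERROR 2002 (HY000): Can't connect to local MySQL server"),
    (False, "ERROR 2002 (HY000): Can't connect to local MySQL server"),
    (True, ""),
])
result = run_until_ready(lambda: next(outcomes))
```

Distinguishing transient from unexpected errors keeps a genuine misconfiguration (say, a wrong password after init has finished) from being retried for the full timeout.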

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11900/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo cat /etc/test/nested/copy/11900/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.48s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11900.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo cat /etc/ssl/certs/11900.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11900.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo cat /usr/share/ca-certificates/11900.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/119002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo cat /etc/ssl/certs/119002.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/119002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo cat /usr/share/ca-certificates/119002.pem"
2024/07/19 03:39:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-966390 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo systemctl is-active docker"
E0719 03:39:31.198860   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 ssh "sudo systemctl is-active docker": exit status 1 (227.648687ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 ssh "sudo systemctl is-active crio": exit status 1 (227.830851ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
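The "failures" in this section are expected: `systemctl is-active` exits 0 only when the unit is active and prints the state (`inactive` here, carried back over ssh as exit status 3), so the test passing means docker and crio are correctly disabled on a containerd node. Interpreting such a result pair can be sketched as below; this is an illustrative helper, not part of the test suite:

```python
def runtime_state(exit_status: int, stdout: str) -> str:
    """Interpret a `systemctl is-active <unit>` result seen over ssh:
    exit 0 means active; otherwise trust the state printed on stdout
    (e.g. 'inactive', as in the log above)."""
    if exit_status == 0:
        return "active"
    return stdout.strip() or "unknown"

def assert_only_runtime(results: dict, expected: str) -> None:
    """Fail unless `expected` is the only active runtime among `results`,
    a mapping of unit name -> (exit_status, stdout)."""
    for unit, (code, out) in results.items():
        state = runtime_state(code, out)
        want = "active" if unit == expected else "inactive"
        if state != want:
            raise AssertionError(f"{unit}: got {state}, want {want}")
```

Checking stdout rather than just the exit code distinguishes a cleanly `inactive` unit from an ssh transport failure, which would leave stdout empty.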

                                                
                                    
TestFunctional/parallel/License (0.64s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-966390 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-966390 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-vvwsb" [a9cba2f7-4096-4438-b9dd-db599da8d431] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-vvwsb" [a9cba2f7-4096-4438-b9dd-db599da8d431] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003530932s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "357.055054ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "88.831548ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/MountCmd/any-port (8.88s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdany-port2631580066/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721360357597129658" to /tmp/TestFunctionalparallelMountCmdany-port2631580066/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721360357597129658" to /tmp/TestFunctionalparallelMountCmdany-port2631580066/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721360357597129658" to /tmp/TestFunctionalparallelMountCmdany-port2631580066/001/test-1721360357597129658
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.795731ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 03:39 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 03:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 03:39 test-1721360357597129658
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh cat /mount-9p/test-1721360357597129658
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-966390 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8e075b00-d86f-4ec9-a5a5-5a9ebcb0f692] Pending
helpers_test.go:344: "busybox-mount" [8e075b00-d86f-4ec9-a5a5-5a9ebcb0f692] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8e075b00-d86f-4ec9-a5a5-5a9ebcb0f692] Running
helpers_test.go:344: "busybox-mount" [8e075b00-d86f-4ec9-a5a5-5a9ebcb0f692] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8e075b00-d86f-4ec9-a5a5-5a9ebcb0f692] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004469232s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-966390 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdany-port2631580066/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.88s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "417.007352ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "54.051452ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-966390 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-966390 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-966390 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 55431: os: process already finished
helpers_test.go:502: unable to terminate pid 55127: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-966390 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-966390 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-966390 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d28d1037-51be-4a29-8495-1757ea94bf6a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d28d1037-51be-4a29-8495-1757ea94bf6a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003841339s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.21s)

TestFunctional/parallel/ServiceCmd/List (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.41s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 service list -o json
functional_test.go:1490: Took "283.723344ms" to run "out/minikube-linux-amd64 -p functional-966390 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30972
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30972
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/MountCmd/specific-port (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdspecific-port4088378685/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.192257ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdspecific-port4088378685/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 ssh "sudo umount -f /mount-9p": exit status 1 (289.580135ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-966390 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdspecific-port4088378685/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2959325742/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2959325742/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2959325742/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T" /mount1: exit status 1 (490.635444ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-966390 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2959325742/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2959325742/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-966390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2959325742/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.69s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-966390 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-966390
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-966390
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-966390 image ls --format short --alsologtostderr:
I0719 03:39:41.694499   61422 out.go:291] Setting OutFile to fd 1 ...
I0719 03:39:41.694597   61422 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:41.694604   61422 out.go:304] Setting ErrFile to fd 2...
I0719 03:39:41.694608   61422 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:41.694847   61422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
I0719 03:39:41.695401   61422 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:41.695492   61422 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:41.695858   61422 cli_runner.go:164] Run: docker container inspect functional-966390 --format={{.State.Status}}
I0719 03:39:41.714835   61422 ssh_runner.go:195] Run: systemctl --version
I0719 03:39:41.714877   61422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-966390
I0719 03:39:41.732165   61422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/functional-966390/id_rsa Username:docker}
I0719 03:39:41.810816   61422 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-966390 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-apiserver              | v1.30.3            | sha256:1f6d57 | 32.8MB |
| docker.io/kicbase/echo-server               | functional-966390  | sha256:9056ab | 2.37MB |
| docker.io/library/minikube-local-cache-test | functional-966390  | sha256:a75b6b | 991B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3            | sha256:76932a | 31.1MB |
| registry.k8s.io/kube-proxy                  | v1.30.3            | sha256:55bb02 | 29MB   |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/nginx                     | latest             | sha256:fffffc | 71MB   |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-scheduler              | v1.30.3            | sha256:3edc18 | 19.3MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20240715-585640e9 | sha256:5cc3ab | 36.8MB |
| docker.io/library/nginx                     | alpine             | sha256:099a2d | 18.4MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-966390 image ls --format table --alsologtostderr:
I0719 03:39:42.115290   61666 out.go:291] Setting OutFile to fd 1 ...
I0719 03:39:42.115548   61666 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:42.115558   61666 out.go:304] Setting ErrFile to fd 2...
I0719 03:39:42.115564   61666 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:42.115784   61666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
I0719 03:39:42.116367   61666 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:42.116480   61666 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:42.117024   61666 cli_runner.go:164] Run: docker container inspect functional-966390 --format={{.State.Status}}
I0719 03:39:42.138582   61666 ssh_runner.go:195] Run: systemctl --version
I0719 03:39:42.138634   61666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-966390
I0719 03:39:42.155814   61666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/functional-966390/id_rsa Username:docker}
I0719 03:39:42.234418   61666 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-966390 image ls --format json --alsologtostderr:
[{"id":"sha256:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"36775157"},{"id":"sha256:a75b6bf24dd6f9a9d4fc7da93dfb3059b42b317f524e767cce6646e1fab205b9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-966390"],"size":"991"},{"id":"sha256:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":["docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55"],"repoTags":["docker.io/library/nginx:alpine"],"size":"18403459"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-966390"],"size":"2372971"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"32770038"},{"id":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"31139481"},{"id":"sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df"],"repoTags":["docker.io/library/nginx:latest"],"size":"70984068"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"29035454"},{"id":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"19329508"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-966390 image ls --format json --alsologtostderr:
I0719 03:39:41.918572   61539 out.go:291] Setting OutFile to fd 1 ...
I0719 03:39:41.918826   61539 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:41.918835   61539 out.go:304] Setting ErrFile to fd 2...
I0719 03:39:41.918840   61539 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:41.919009   61539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
I0719 03:39:41.920338   61539 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:41.920579   61539 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:41.921121   61539 cli_runner.go:164] Run: docker container inspect functional-966390 --format={{.State.Status}}
I0719 03:39:41.938205   61539 ssh_runner.go:195] Run: systemctl --version
I0719 03:39:41.938270   61539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-966390
I0719 03:39:41.958739   61539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/functional-966390/id_rsa Username:docker}
I0719 03:39:42.038754   61539 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-966390 image ls --format yaml --alsologtostderr:
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-966390
size: "2372971"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "32770038"
- id: sha256:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "36775157"
- id: sha256:a75b6bf24dd6f9a9d4fc7da93dfb3059b42b317f524e767cce6646e1fab205b9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-966390
size: "991"
- id: sha256:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests:
- docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55
repoTags:
- docker.io/library/nginx:alpine
size: "18403459"
- id: sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
repoTags:
- docker.io/library/nginx:latest
size: "70984068"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "29035454"
- id: sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "19329508"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "31139481"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-966390 image ls --format yaml --alsologtostderr:
I0719 03:39:41.715940   61441 out.go:291] Setting OutFile to fd 1 ...
I0719 03:39:41.716030   61441 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:41.716038   61441 out.go:304] Setting ErrFile to fd 2...
I0719 03:39:41.716043   61441 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:41.716233   61441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
I0719 03:39:41.716859   61441 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:41.716970   61441 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:41.717322   61441 cli_runner.go:164] Run: docker container inspect functional-966390 --format={{.State.Status}}
I0719 03:39:41.734323   61441 ssh_runner.go:195] Run: systemctl --version
I0719 03:39:41.734368   61441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-966390
I0719 03:39:41.750288   61441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/functional-966390/id_rsa Username:docker}
I0719 03:39:41.834452   61441 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-966390 ssh pgrep buildkitd: exit status 1 (242.249593ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image build -t localhost/my-image:functional-966390 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-966390 image build -t localhost/my-image:functional-966390 testdata/build --alsologtostderr: (3.925833743s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-966390 image build -t localhost/my-image:functional-966390 testdata/build --alsologtostderr:
I0719 03:39:42.137374   61677 out.go:291] Setting OutFile to fd 1 ...
I0719 03:39:42.137695   61677 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:42.137707   61677 out.go:304] Setting ErrFile to fd 2...
I0719 03:39:42.137714   61677 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:39:42.137992   61677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
I0719 03:39:42.138547   61677 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:42.139146   61677 config.go:182] Loaded profile config "functional-966390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0719 03:39:42.139566   61677 cli_runner.go:164] Run: docker container inspect functional-966390 --format={{.State.Status}}
I0719 03:39:42.156788   61677 ssh_runner.go:195] Run: systemctl --version
I0719 03:39:42.156831   61677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-966390
I0719 03:39:42.173643   61677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/functional-966390/id_rsa Username:docker}
I0719 03:39:42.254442   61677 build_images.go:161] Building image from path: /tmp/build.2777954072.tar
I0719 03:39:42.254493   61677 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 03:39:42.262797   61677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2777954072.tar
I0719 03:39:42.266214   61677 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2777954072.tar: stat -c "%s %y" /var/lib/minikube/build/build.2777954072.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2777954072.tar': No such file or directory
I0719 03:39:42.266245   61677 ssh_runner.go:362] scp /tmp/build.2777954072.tar --> /var/lib/minikube/build/build.2777954072.tar (3072 bytes)
I0719 03:39:42.288920   61677 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2777954072
I0719 03:39:42.296452   61677 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2777954072 -xf /var/lib/minikube/build/build.2777954072.tar
I0719 03:39:42.304619   61677 containerd.go:394] Building image: /var/lib/minikube/build/build.2777954072
I0719 03:39:42.304684   61677 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2777954072 --local dockerfile=/var/lib/minikube/build/build.2777954072 --output type=image,name=localhost/my-image:functional-966390
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.8s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:73f00d0feda1fe8f8055b9896fcdb29ec83b35092f22fd2efda431282c1c0879 done
#8 exporting config sha256:298fa0d61a5643e0e7510416db084cfd977e233a92973f3b2c6aee73bab03641 done
#8 naming to localhost/my-image:functional-966390 done
#8 DONE 0.1s
I0719 03:39:45.987874   61677 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2777954072 --local dockerfile=/var/lib/minikube/build/build.2777954072 --output type=image,name=localhost/my-image:functional-966390: (3.683160457s)
I0719 03:39:45.987961   61677 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2777954072
I0719 03:39:46.000077   61677 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2777954072.tar
I0719 03:39:46.008013   61677 build_images.go:217] Built localhost/my-image:functional-966390 from /tmp/build.2777954072.tar
I0719 03:39:46.008041   61677 build_images.go:133] succeeded building to: functional-966390
I0719 03:39:46.008045   61677 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.42s)

TestFunctional/parallel/ImageCommands/Setup (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.950038233s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-966390
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-966390 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.93.191 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-966390 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image load --daemon docker.io/kicbase/echo-server:functional-966390 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-966390 image load --daemon docker.io/kicbase/echo-server:functional-966390 --alsologtostderr: (1.495476809s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image load --daemon docker.io/kicbase/echo-server:functional-966390 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-966390 image load --daemon docker.io/kicbase/echo-server:functional-966390 --alsologtostderr: (1.013077717s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:234: (dbg) Done: docker pull docker.io/kicbase/echo-server:latest: (1.201492699s)
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-966390
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image load --daemon docker.io/kicbase/echo-server:functional-966390 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-966390 image load --daemon docker.io/kicbase/echo-server:functional-966390 --alsologtostderr: (1.121613424s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.60s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image save docker.io/kicbase/echo-server:functional-966390 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image rm docker.io/kicbase/echo-server:functional-966390 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-966390
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-966390 image save --daemon docker.io/kicbase/echo-server:functional-966390 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-966390
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-966390
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-966390
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-966390
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

x
+
TestMultiControlPlane/serial/StartCluster (101.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-023732 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0719 03:40:12.159522   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:41:34.080542   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-023732 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m40.534448157s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (101.16s)

TestMultiControlPlane/serial/DeployApp (32.01s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-023732 -- rollout status deployment/busybox: (30.323029599s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-6wmcp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-lfhtz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-ncl6s -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-6wmcp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-lfhtz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-ncl6s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-6wmcp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-lfhtz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-ncl6s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.01s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.95s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-6wmcp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-6wmcp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-lfhtz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-lfhtz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-ncl6s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-023732 -- exec busybox-fc5497c4f-ncl6s -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.95s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (21.28s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-023732 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-023732 -v=7 --alsologtostderr: (20.523809105s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.28s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-023732 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.59s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.59s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (14.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp testdata/cp-test.txt ha-023732:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2720545689/001/cp-test_ha-023732.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732:/home/docker/cp-test.txt ha-023732-m02:/home/docker/cp-test_ha-023732_ha-023732-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m02 "sudo cat /home/docker/cp-test_ha-023732_ha-023732-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732:/home/docker/cp-test.txt ha-023732-m03:/home/docker/cp-test_ha-023732_ha-023732-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m03 "sudo cat /home/docker/cp-test_ha-023732_ha-023732-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732:/home/docker/cp-test.txt ha-023732-m04:/home/docker/cp-test_ha-023732_ha-023732-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m04 "sudo cat /home/docker/cp-test_ha-023732_ha-023732-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp testdata/cp-test.txt ha-023732-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2720545689/001/cp-test_ha-023732-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m02:/home/docker/cp-test.txt ha-023732:/home/docker/cp-test_ha-023732-m02_ha-023732.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732 "sudo cat /home/docker/cp-test_ha-023732-m02_ha-023732.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m02:/home/docker/cp-test.txt ha-023732-m03:/home/docker/cp-test_ha-023732-m02_ha-023732-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m03 "sudo cat /home/docker/cp-test_ha-023732-m02_ha-023732-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m02:/home/docker/cp-test.txt ha-023732-m04:/home/docker/cp-test_ha-023732-m02_ha-023732-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m04 "sudo cat /home/docker/cp-test_ha-023732-m02_ha-023732-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp testdata/cp-test.txt ha-023732-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2720545689/001/cp-test_ha-023732-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m03:/home/docker/cp-test.txt ha-023732:/home/docker/cp-test_ha-023732-m03_ha-023732.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732 "sudo cat /home/docker/cp-test_ha-023732-m03_ha-023732.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m03:/home/docker/cp-test.txt ha-023732-m02:/home/docker/cp-test_ha-023732-m03_ha-023732-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m02 "sudo cat /home/docker/cp-test_ha-023732-m03_ha-023732-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m03:/home/docker/cp-test.txt ha-023732-m04:/home/docker/cp-test_ha-023732-m03_ha-023732-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m04 "sudo cat /home/docker/cp-test_ha-023732-m03_ha-023732-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp testdata/cp-test.txt ha-023732-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2720545689/001/cp-test_ha-023732-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m04:/home/docker/cp-test.txt ha-023732:/home/docker/cp-test_ha-023732-m04_ha-023732.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732 "sudo cat /home/docker/cp-test_ha-023732-m04_ha-023732.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m04:/home/docker/cp-test.txt ha-023732-m02:/home/docker/cp-test_ha-023732-m04_ha-023732-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m02 "sudo cat /home/docker/cp-test_ha-023732-m04_ha-023732-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 cp ha-023732-m04:/home/docker/cp-test.txt ha-023732-m03:/home/docker/cp-test_ha-023732-m04_ha-023732-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 ssh -n ha-023732-m03 "sudo cat /home/docker/cp-test_ha-023732-m04_ha-023732-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.53s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.43s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-023732 node stop m02 -v=7 --alsologtostderr: (11.807657567s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr: exit status 7 (621.194976ms)

                                                
                                                
-- stdout --
	ha-023732
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-023732-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023732-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-023732-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 03:43:05.668459   83641 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:43:05.668746   83641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:43:05.668755   83641 out.go:304] Setting ErrFile to fd 2...
	I0719 03:43:05.668762   83641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:43:05.668983   83641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 03:43:05.669178   83641 out.go:298] Setting JSON to false
	I0719 03:43:05.669213   83641 mustload.go:65] Loading cluster: ha-023732
	I0719 03:43:05.669270   83641 notify.go:220] Checking for updates...
	I0719 03:43:05.669603   83641 config.go:182] Loaded profile config "ha-023732": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 03:43:05.669621   83641 status.go:255] checking status of ha-023732 ...
	I0719 03:43:05.670019   83641 cli_runner.go:164] Run: docker container inspect ha-023732 --format={{.State.Status}}
	I0719 03:43:05.687743   83641 status.go:330] ha-023732 host status = "Running" (err=<nil>)
	I0719 03:43:05.687766   83641 host.go:66] Checking if "ha-023732" exists ...
	I0719 03:43:05.688047   83641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-023732
	I0719 03:43:05.705302   83641 host.go:66] Checking if "ha-023732" exists ...
	I0719 03:43:05.705551   83641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 03:43:05.705586   83641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-023732
	I0719 03:43:05.723722   83641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/ha-023732/id_rsa Username:docker}
	I0719 03:43:05.807779   83641 ssh_runner.go:195] Run: systemctl --version
	I0719 03:43:05.811922   83641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:43:05.822412   83641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:43:05.870493   83641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:72 SystemTime:2024-07-19 03:43:05.861219268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:43:05.871077   83641 kubeconfig.go:125] found "ha-023732" server: "https://192.168.49.254:8443"
	I0719 03:43:05.871100   83641 api_server.go:166] Checking apiserver status ...
	I0719 03:43:05.871131   83641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 03:43:05.881636   83641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1632/cgroup
	I0719 03:43:05.890109   83641 api_server.go:182] apiserver freezer: "3:freezer:/docker/2534087a73122b446eba4b16002d0aeabde29b81a839b07dc398b734fd6c60b5/kubepods/burstable/pod3f6e19abd42a0e19613cb35a023d30fb/1e5e02443b9898020e5ad68f7a5772a4f054be33c91be0c5242bab9b73eccfa7"
	I0719 03:43:05.890171   83641 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2534087a73122b446eba4b16002d0aeabde29b81a839b07dc398b734fd6c60b5/kubepods/burstable/pod3f6e19abd42a0e19613cb35a023d30fb/1e5e02443b9898020e5ad68f7a5772a4f054be33c91be0c5242bab9b73eccfa7/freezer.state
	I0719 03:43:05.898545   83641 api_server.go:204] freezer state: "THAWED"
	I0719 03:43:05.898599   83641 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0719 03:43:05.902501   83641 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0719 03:43:05.902534   83641 status.go:422] ha-023732 apiserver status = Running (err=<nil>)
	I0719 03:43:05.902546   83641 status.go:257] ha-023732 status: &{Name:ha-023732 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:43:05.902569   83641 status.go:255] checking status of ha-023732-m02 ...
	I0719 03:43:05.902885   83641 cli_runner.go:164] Run: docker container inspect ha-023732-m02 --format={{.State.Status}}
	I0719 03:43:05.920080   83641 status.go:330] ha-023732-m02 host status = "Stopped" (err=<nil>)
	I0719 03:43:05.920105   83641 status.go:343] host is not running, skipping remaining checks
	I0719 03:43:05.920113   83641 status.go:257] ha-023732-m02 status: &{Name:ha-023732-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:43:05.920136   83641 status.go:255] checking status of ha-023732-m03 ...
	I0719 03:43:05.920394   83641 cli_runner.go:164] Run: docker container inspect ha-023732-m03 --format={{.State.Status}}
	I0719 03:43:05.938145   83641 status.go:330] ha-023732-m03 host status = "Running" (err=<nil>)
	I0719 03:43:05.938174   83641 host.go:66] Checking if "ha-023732-m03" exists ...
	I0719 03:43:05.938429   83641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-023732-m03
	I0719 03:43:05.955502   83641 host.go:66] Checking if "ha-023732-m03" exists ...
	I0719 03:43:05.955779   83641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 03:43:05.955824   83641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-023732-m03
	I0719 03:43:05.975429   83641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/ha-023732-m03/id_rsa Username:docker}
	I0719 03:43:06.060207   83641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:43:06.070952   83641 kubeconfig.go:125] found "ha-023732" server: "https://192.168.49.254:8443"
	I0719 03:43:06.070980   83641 api_server.go:166] Checking apiserver status ...
	I0719 03:43:06.071011   83641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 03:43:06.080687   83641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	I0719 03:43:06.088974   83641 api_server.go:182] apiserver freezer: "3:freezer:/docker/fb1c4b8376f48ca6ecb6c4cc7529c02c21607e43722ab0538184d9c3b3e86bed/kubepods/burstable/pod542b3445cfc00854d6f34b2147739959/bf1fdac5856c2da8888c982ad821939ef45809c0b9a3005425b9fbaae6db2b23"
	I0719 03:43:06.089044   83641 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fb1c4b8376f48ca6ecb6c4cc7529c02c21607e43722ab0538184d9c3b3e86bed/kubepods/burstable/pod542b3445cfc00854d6f34b2147739959/bf1fdac5856c2da8888c982ad821939ef45809c0b9a3005425b9fbaae6db2b23/freezer.state
	I0719 03:43:06.096446   83641 api_server.go:204] freezer state: "THAWED"
	I0719 03:43:06.096472   83641 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0719 03:43:06.099953   83641 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0719 03:43:06.099977   83641 status.go:422] ha-023732-m03 apiserver status = Running (err=<nil>)
	I0719 03:43:06.099988   83641 status.go:257] ha-023732-m03 status: &{Name:ha-023732-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:43:06.100007   83641 status.go:255] checking status of ha-023732-m04 ...
	I0719 03:43:06.100233   83641 cli_runner.go:164] Run: docker container inspect ha-023732-m04 --format={{.State.Status}}
	I0719 03:43:06.117789   83641 status.go:330] ha-023732-m04 host status = "Running" (err=<nil>)
	I0719 03:43:06.117817   83641 host.go:66] Checking if "ha-023732-m04" exists ...
	I0719 03:43:06.118063   83641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-023732-m04
	I0719 03:43:06.134276   83641 host.go:66] Checking if "ha-023732-m04" exists ...
	I0719 03:43:06.134535   83641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 03:43:06.134591   83641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-023732-m04
	I0719 03:43:06.153026   83641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/ha-023732-m04/id_rsa Username:docker}
	I0719 03:43:06.235412   83641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:43:06.245996   83641 status.go:257] ha-023732-m04 status: &{Name:ha-023732-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.43s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (15.46s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-023732 node start m02 -v=7 --alsologtostderr: (14.616487568s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (15.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.6s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.60s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.1s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-023732 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-023732 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-023732 -v=7 --alsologtostderr: (25.836279982s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-023732 --wait=true -v=7 --alsologtostderr
E0719 03:43:50.236420   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:44:16.764938   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:16.770242   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:16.780488   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:16.800763   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:16.841038   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:16.921345   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:17.082261   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:17.402812   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:17.921462   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 03:44:18.043990   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:19.324640   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:21.885441   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:27.005717   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:37.246215   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 03:44:57.727023   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-023732 --wait=true -v=7 --alsologtostderr: (1m20.167402941s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-023732
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.10s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.77s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-023732 node delete m03 -v=7 --alsologtostderr: (9.051341013s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.43s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.43s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.51s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 stop -v=7 --alsologtostderr
E0719 03:45:38.687725   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-023732 stop -v=7 --alsologtostderr: (35.413292847s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr: exit status 7 (95.540183ms)
-- stdout --
	ha-023732
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023732-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023732-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0719 03:45:54.526807  100106 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:45:54.527070  100106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:45:54.527080  100106 out.go:304] Setting ErrFile to fd 2...
	I0719 03:45:54.527087  100106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:45:54.527266  100106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 03:45:54.527444  100106 out.go:298] Setting JSON to false
	I0719 03:45:54.527479  100106 mustload.go:65] Loading cluster: ha-023732
	I0719 03:45:54.527520  100106 notify.go:220] Checking for updates...
	I0719 03:45:54.527862  100106 config.go:182] Loaded profile config "ha-023732": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 03:45:54.527882  100106 status.go:255] checking status of ha-023732 ...
	I0719 03:45:54.528344  100106 cli_runner.go:164] Run: docker container inspect ha-023732 --format={{.State.Status}}
	I0719 03:45:54.546929  100106 status.go:330] ha-023732 host status = "Stopped" (err=<nil>)
	I0719 03:45:54.546949  100106 status.go:343] host is not running, skipping remaining checks
	I0719 03:45:54.546957  100106 status.go:257] ha-023732 status: &{Name:ha-023732 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:45:54.546994  100106 status.go:255] checking status of ha-023732-m02 ...
	I0719 03:45:54.547229  100106 cli_runner.go:164] Run: docker container inspect ha-023732-m02 --format={{.State.Status}}
	I0719 03:45:54.563430  100106 status.go:330] ha-023732-m02 host status = "Stopped" (err=<nil>)
	I0719 03:45:54.563451  100106 status.go:343] host is not running, skipping remaining checks
	I0719 03:45:54.563459  100106 status.go:257] ha-023732-m02 status: &{Name:ha-023732-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:45:54.563483  100106 status.go:255] checking status of ha-023732-m04 ...
	I0719 03:45:54.563728  100106 cli_runner.go:164] Run: docker container inspect ha-023732-m04 --format={{.State.Status}}
	I0719 03:45:54.580827  100106 status.go:330] ha-023732-m04 host status = "Stopped" (err=<nil>)
	I0719 03:45:54.580876  100106 status.go:343] host is not running, skipping remaining checks
	I0719 03:45:54.580890  100106 status.go:257] ha-023732-m04 status: &{Name:ha-023732-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.51s)

TestMultiControlPlane/serial/RestartCluster (74.82s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-023732 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0719 03:47:00.608033   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-023732 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.103771605s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (74.82s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

TestMultiControlPlane/serial/AddSecondaryNode (35.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-023732 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-023732 --control-plane -v=7 --alsologtostderr: (34.784258315s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-023732 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.6s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.60s)

TestJSONOutput/start/Command (52.18s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-822407 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-822407 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (52.175325626s)
--- PASS: TestJSONOutput/start/Command (52.18s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-822407 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-822407 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.7s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-822407 --output=json --user=testUser
E0719 03:48:50.236224   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-822407 --output=json --user=testUser: (5.698170296s)
--- PASS: TestJSONOutput/stop/Command (5.70s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-160134 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-160134 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.36625ms)
-- stdout --
	{"specversion":"1.0","id":"5e146d33-fefd-4f48-9bf3-58c18d2194fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-160134] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7386413-1a1a-4f46-9c8e-7c8101a55b85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"9e7b668b-8692-471f-be3f-17be114cbc1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"955740c7-8b58-40e7-8c0d-66d2c36f3ce1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig"}}
	{"specversion":"1.0","id":"1adc33f3-d68c-41be-aa83-b5042adb6129","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube"}}
	{"specversion":"1.0","id":"a74c25bb-a58a-4610-8537-b57d88a9180f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f6e8182b-f1c7-4205-8af0-1c48890121d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"97e56482-bcb2-4a8c-89cd-ec0225379ffa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-160134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-160134
--- PASS: TestErrorJSONOutput (0.18s)
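Each line minikube prints under `--output=json` is a CloudEvents-style envelope whose payload sits under `data`; the error event above carries the exit code and message. A minimal sketch of pulling those fields out, using the error line copied from the stdout above:

```python
import json

# Error event copied verbatim from the -- stdout -- block above.
line = ('{"specversion":"1.0","id":"97e56482-bcb2-4a8c-89cd-ec0225379ffa",'
        '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
        '"issues":"","message":"The driver \'fail\' is not supported on linux/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
# The event kind is encoded in "type"; the data fields are all strings.
assert event["type"] == "io.k8s.sigs.minikube.error"
print(event["data"]["name"], event["data"]["exitcode"])
print(event["data"]["message"])
```

Note that `exitcode` is emitted as a JSON string (`"56"`), not a number, matching the `exit status 56` seen by the test.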

TestKicCustomNetwork/create_custom_network (35.16s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-570379 --network=
E0719 03:49:16.764317   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-570379 --network=: (33.167825661s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-570379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-570379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-570379: (1.975930748s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.16s)

TestKicCustomNetwork/use_default_bridge_network (22.33s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-347295 --network=bridge
E0719 03:49:44.449271   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-347295 --network=bridge: (20.462153145s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-347295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-347295
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-347295: (1.849911742s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.33s)

TestKicExistingNetwork (22.31s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-923114 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-923114 --network=existing-network: (20.337590229s)
helpers_test.go:175: Cleaning up "existing-network-923114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-923114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-923114: (1.840968558s)
--- PASS: TestKicExistingNetwork (22.31s)

TestKicCustomSubnet (23s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-757630 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-757630 --subnet=192.168.60.0/24: (21.336884447s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-757630 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-757630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-757630
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-757630: (1.64282392s)
--- PASS: TestKicCustomSubnet (23.00s)
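TestKicCustomSubnet above checks that the subnet docker reports for the network matches the one requested via `--subnet`, and TestKicStaticIP below does the analogous check for a single address. A hedged sketch of those comparisons with Python's `ipaddress` module, using the values from these runs; `reported` stands in for what the `docker network inspect` template would print, and the static-IP subnet is assumed:

```python
import ipaddress

requested = "192.168.60.0/24"   # passed via --subnet in the run above
reported = "192.168.60.0/24"    # stand-in for docker network inspect output

# Compare as networks, not strings, so equivalent spellings still match.
assert ipaddress.ip_network(reported) == ipaddress.ip_network(requested)

# Static-IP variant: the address handed to --static-ip must fall inside
# the network docker created for it (the /24 here is an assumption).
static_ip = ipaddress.ip_address("192.168.200.200")
assert static_ip in ipaddress.ip_network("192.168.200.0/24")
print("subnet and static-ip checks passed")
```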

TestKicStaticIP (25.6s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-825385 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-825385 --static-ip=192.168.200.200: (23.426181741s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-825385 ip
helpers_test.go:175: Cleaning up "static-ip-825385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-825385
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-825385: (2.047000996s)
--- PASS: TestKicStaticIP (25.60s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (43.95s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-367727 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-367727 --driver=docker  --container-runtime=containerd: (19.586769815s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-379232 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-379232 --driver=docker  --container-runtime=containerd: (19.786218221s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-367727
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-379232
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-379232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-379232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-379232: (1.797785704s)
helpers_test.go:175: Cleaning up "first-367727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-367727
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-367727: (1.822771795s)
--- PASS: TestMinikubeProfile (43.95s)

TestMountStart/serial/StartWithMountFirst (5.56s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-645507 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-645507 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.558416373s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.56s)

TestMountStart/serial/VerifyMountFirst (0.22s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-645507 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.22s)

TestMountStart/serial/StartWithMountSecond (5.13s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-656396 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-656396 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.134121786s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.13s)

TestMountStart/serial/VerifyMountSecond (0.22s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-656396 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.22s)

TestMountStart/serial/DeleteFirst (1.54s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-645507 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-645507 --alsologtostderr -v=5: (1.541559554s)
--- PASS: TestMountStart/serial/DeleteFirst (1.54s)

TestMountStart/serial/VerifyMountPostDelete (0.22s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-656396 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.22s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-656396
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-656396: (1.166446224s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (6.98s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-656396
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-656396: (5.976050663s)
--- PASS: TestMountStart/serial/RestartStopped (6.98s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-656396 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (64.87s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-262899 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-262899 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.473257425s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.87s)

TestMultiNode/serial/DeployApp2Nodes (17.77s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-262899 -- rollout status deployment/busybox: (16.559643458s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-hvhps -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-wq62f -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-hvhps -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-wq62f -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-hvhps -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-wq62f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.77s)

TestMultiNode/serial/PingHostFrom2Pods (0.64s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-hvhps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-hvhps -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-wq62f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-262899 -- exec busybox-fc5497c4f-wq62f -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.64s)
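The test above recovers the host gateway IP inside each pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, then pings it. A minimal sketch of what that pipeline extracts, run against canned output in the shape older BusyBox `nslookup` prints (all addresses and names here are illustrative, not taken from this run):

```shell
# Canned BusyBox-style nslookup output (values illustrative):
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal'

# NR==5 keeps only the fifth line ("Address 1: 192.168.67.1 ...");
# cut -d' ' -f3 takes its third space-separated field, i.e. the resolved
# IP that the test then probes with `ping -c 1`.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"   # 192.168.67.1
```

The `NR==5` assumption is why the pipeline is tied to BusyBox's exact output shape: a resolver that prints a different number of header lines would shift the address line away from line 5.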

TestMultiNode/serial/AddNode (17.27s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-262899 -v 3 --alsologtostderr
E0719 03:53:50.236856   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-262899 -v 3 --alsologtostderr: (16.708629788s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.27s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-262899 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.27s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

TestMultiNode/serial/CopyFile (8.37s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp testdata/cp-test.txt multinode-262899:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp multinode-262899:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2358499905/001/cp-test_multinode-262899.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp multinode-262899:/home/docker/cp-test.txt multinode-262899-m02:/home/docker/cp-test_multinode-262899_multinode-262899-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m02 "sudo cat /home/docker/cp-test_multinode-262899_multinode-262899-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp multinode-262899:/home/docker/cp-test.txt multinode-262899-m03:/home/docker/cp-test_multinode-262899_multinode-262899-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m03 "sudo cat /home/docker/cp-test_multinode-262899_multinode-262899-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp testdata/cp-test.txt multinode-262899-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp multinode-262899-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2358499905/001/cp-test_multinode-262899-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp multinode-262899-m02:/home/docker/cp-test.txt multinode-262899:/home/docker/cp-test_multinode-262899-m02_multinode-262899.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899 "sudo cat /home/docker/cp-test_multinode-262899-m02_multinode-262899.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp multinode-262899-m02:/home/docker/cp-test.txt multinode-262899-m03:/home/docker/cp-test_multinode-262899-m02_multinode-262899-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m03 "sudo cat /home/docker/cp-test_multinode-262899-m02_multinode-262899-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp testdata/cp-test.txt multinode-262899-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp multinode-262899-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2358499905/001/cp-test_multinode-262899-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp multinode-262899-m03:/home/docker/cp-test.txt multinode-262899:/home/docker/cp-test_multinode-262899-m03_multinode-262899.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899 "sudo cat /home/docker/cp-test_multinode-262899-m03_multinode-262899.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 cp multinode-262899-m03:/home/docker/cp-test.txt multinode-262899-m02:/home/docker/cp-test_multinode-262899-m03_multinode-262899-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 ssh -n multinode-262899-m02 "sudo cat /home/docker/cp-test_multinode-262899-m03_multinode-262899-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.37s)

TestMultiNode/serial/StopNode (2.04s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-262899 node stop m03: (1.166732348s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-262899 status: exit status 7 (433.132599ms)

-- stdout --
	multinode-262899
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-262899-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-262899-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-262899 status --alsologtostderr: exit status 7 (437.621569ms)

-- stdout --
	multinode-262899
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-262899-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-262899-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0719 03:54:02.589527  165709 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:54:02.590105  165709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:54:02.590124  165709 out.go:304] Setting ErrFile to fd 2...
	I0719 03:54:02.590130  165709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:54:02.590560  165709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 03:54:02.590963  165709 out.go:298] Setting JSON to false
	I0719 03:54:02.591058  165709 notify.go:220] Checking for updates...
	I0719 03:54:02.591086  165709 mustload.go:65] Loading cluster: multinode-262899
	I0719 03:54:02.591646  165709 config.go:182] Loaded profile config "multinode-262899": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 03:54:02.591675  165709 status.go:255] checking status of multinode-262899 ...
	I0719 03:54:02.592137  165709 cli_runner.go:164] Run: docker container inspect multinode-262899 --format={{.State.Status}}
	I0719 03:54:02.610486  165709 status.go:330] multinode-262899 host status = "Running" (err=<nil>)
	I0719 03:54:02.610514  165709 host.go:66] Checking if "multinode-262899" exists ...
	I0719 03:54:02.610819  165709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-262899
	I0719 03:54:02.628131  165709 host.go:66] Checking if "multinode-262899" exists ...
	I0719 03:54:02.628399  165709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 03:54:02.628452  165709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-262899
	I0719 03:54:02.645315  165709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/multinode-262899/id_rsa Username:docker}
	I0719 03:54:02.727634  165709 ssh_runner.go:195] Run: systemctl --version
	I0719 03:54:02.731388  165709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:54:02.741663  165709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:54:02.793061  165709 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-07-19 03:54:02.783182486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 03:54:02.793638  165709 kubeconfig.go:125] found "multinode-262899" server: "https://192.168.67.2:8443"
	I0719 03:54:02.793662  165709 api_server.go:166] Checking apiserver status ...
	I0719 03:54:02.793699  165709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 03:54:02.803960  165709 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1615/cgroup
	I0719 03:54:02.812493  165709 api_server.go:182] apiserver freezer: "3:freezer:/docker/a045f405c3370b80ea6c631b0d6ea105905dca1f8041f8dc80265264051e6dcd/kubepods/burstable/podc3cd566e85b7024a75e1a66d382b6566/6e0040a4ff69e8e9645c2e45906fff451fe54b73ba431a598cf6bfc1099f2dc9"
	I0719 03:54:02.812579  165709 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a045f405c3370b80ea6c631b0d6ea105905dca1f8041f8dc80265264051e6dcd/kubepods/burstable/podc3cd566e85b7024a75e1a66d382b6566/6e0040a4ff69e8e9645c2e45906fff451fe54b73ba431a598cf6bfc1099f2dc9/freezer.state
	I0719 03:54:02.820093  165709 api_server.go:204] freezer state: "THAWED"
	I0719 03:54:02.820127  165709 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0719 03:54:02.823672  165709 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0719 03:54:02.823692  165709 status.go:422] multinode-262899 apiserver status = Running (err=<nil>)
	I0719 03:54:02.823701  165709 status.go:257] multinode-262899 status: &{Name:multinode-262899 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:54:02.823722  165709 status.go:255] checking status of multinode-262899-m02 ...
	I0719 03:54:02.823965  165709 cli_runner.go:164] Run: docker container inspect multinode-262899-m02 --format={{.State.Status}}
	I0719 03:54:02.840606  165709 status.go:330] multinode-262899-m02 host status = "Running" (err=<nil>)
	I0719 03:54:02.840627  165709 host.go:66] Checking if "multinode-262899-m02" exists ...
	I0719 03:54:02.840875  165709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-262899-m02
	I0719 03:54:02.857141  165709 host.go:66] Checking if "multinode-262899-m02" exists ...
	I0719 03:54:02.857459  165709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 03:54:02.857502  165709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-262899-m02
	I0719 03:54:02.874987  165709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32915 SSHKeyPath:/home/jenkins/minikube-integration/19302-5122/.minikube/machines/multinode-262899-m02/id_rsa Username:docker}
	I0719 03:54:02.955629  165709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:54:02.965942  165709 status.go:257] multinode-262899-m02 status: &{Name:multinode-262899-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:54:02.965978  165709 status.go:255] checking status of multinode-262899-m03 ...
	I0719 03:54:02.966253  165709 cli_runner.go:164] Run: docker container inspect multinode-262899-m03 --format={{.State.Status}}
	I0719 03:54:02.984424  165709 status.go:330] multinode-262899-m03 host status = "Stopped" (err=<nil>)
	I0719 03:54:02.984447  165709 status.go:343] host is not running, skipping remaining checks
	I0719 03:54:02.984455  165709 status.go:257] multinode-262899-m03 status: &{Name:multinode-262899-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)
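The status probe in the stderr log above checks each node's disk usage over SSH with `df -h /var | awk 'NR==2{print $5}'`. A minimal sketch of what that pipeline yields, using canned `df -h` output (device name and sizes are illustrative):

```shell
# df -h output in its usual header-plus-data shape (numbers illustrative):
df_out='Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        98G   31G   62G  34% /var'

# NR==2 skips the header row; $5 is the Use% column, which is the
# single value the status check reads back from the node.
usage=$(printf '%s\n' "$df_out" | awk 'NR==2{print $5}')
echo "$usage"   # 34%
```

Because `awk` splits on runs of whitespace by default, the uneven column padding in `df -h` output does not affect which field `$5` selects.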

TestMultiNode/serial/StartAfterStop (8.44s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-262899 node start m03 -v=7 --alsologtostderr: (7.82678501s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.44s)

TestMultiNode/serial/RestartKeepsNodes (82.38s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-262899
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-262899
E0719 03:54:16.764242   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-262899: (24.652555422s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-262899 --wait=true -v=8 --alsologtostderr
E0719 03:55:13.282003   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-262899 --wait=true -v=8 --alsologtostderr: (57.641157369s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-262899
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.38s)

TestMultiNode/serial/DeleteNode (5.01s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-262899 node delete m03: (4.473464721s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.01s)

TestMultiNode/serial/StopMultiNode (23.68s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-262899 stop: (23.528015653s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-262899 status: exit status 7 (76.852793ms)

-- stdout --
	multinode-262899
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-262899-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-262899 status --alsologtostderr: exit status 7 (78.297226ms)

-- stdout --
	multinode-262899
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-262899-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0719 03:56:02.455984  175504 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:56:02.456246  175504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:56:02.456255  175504 out.go:304] Setting ErrFile to fd 2...
	I0719 03:56:02.456259  175504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:56:02.456453  175504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 03:56:02.456611  175504 out.go:298] Setting JSON to false
	I0719 03:56:02.456641  175504 mustload.go:65] Loading cluster: multinode-262899
	I0719 03:56:02.456688  175504 notify.go:220] Checking for updates...
	I0719 03:56:02.456997  175504 config.go:182] Loaded profile config "multinode-262899": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 03:56:02.457009  175504 status.go:255] checking status of multinode-262899 ...
	I0719 03:56:02.457375  175504 cli_runner.go:164] Run: docker container inspect multinode-262899 --format={{.State.Status}}
	I0719 03:56:02.475834  175504 status.go:330] multinode-262899 host status = "Stopped" (err=<nil>)
	I0719 03:56:02.475861  175504 status.go:343] host is not running, skipping remaining checks
	I0719 03:56:02.475871  175504 status.go:257] multinode-262899 status: &{Name:multinode-262899 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:56:02.475907  175504 status.go:255] checking status of multinode-262899-m02 ...
	I0719 03:56:02.476292  175504 cli_runner.go:164] Run: docker container inspect multinode-262899-m02 --format={{.State.Status}}
	I0719 03:56:02.492727  175504 status.go:330] multinode-262899-m02 host status = "Stopped" (err=<nil>)
	I0719 03:56:02.492765  175504 status.go:343] host is not running, skipping remaining checks
	I0719 03:56:02.492777  175504 status.go:257] multinode-262899-m02 status: &{Name:multinode-262899-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.68s)

TestMultiNode/serial/RestartMultiNode (52.2s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-262899 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-262899 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.677161212s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-262899 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.20s)

TestMultiNode/serial/ValidateNameConflict (22.08s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-262899
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-262899-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-262899-m02 --driver=docker  --container-runtime=containerd: exit status 14 (58.884399ms)

-- stdout --
	* [multinode-262899-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-262899-m02' is duplicated with machine name 'multinode-262899-m02' in profile 'multinode-262899'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-262899-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-262899-m03 --driver=docker  --container-runtime=containerd: (19.951078448s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-262899
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-262899: exit status 80 (247.399591ms)

-- stdout --
	* Adding node m03 to cluster multinode-262899 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-262899-m03 already exists in multinode-262899-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-262899-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-262899-m03: (1.776896199s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.08s)

TestPreload (153.39s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-008908 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0719 03:58:50.236057   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-008908 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m50.287684421s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-008908 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-008908 image pull gcr.io/k8s-minikube/busybox: (2.389911156s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-008908
E0719 03:59:16.764836   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-008908: (11.904800282s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-008908 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-008908 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (26.253888665s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-008908 image list
helpers_test.go:175: Cleaning up "test-preload-008908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-008908
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-008908: (2.289599639s)
--- PASS: TestPreload (153.39s)

TestScheduledStopUnix (97.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-569975 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-569975 --memory=2048 --driver=docker  --container-runtime=containerd: (20.798055137s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-569975 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-569975 -n scheduled-stop-569975
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-569975 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-569975 --cancel-scheduled
E0719 04:00:39.811781   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-569975 -n scheduled-stop-569975
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-569975
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-569975 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-569975
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-569975: exit status 7 (60.957193ms)

-- stdout --
	scheduled-stop-569975
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-569975 -n scheduled-stop-569975
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-569975 -n scheduled-stop-569975: exit status 7 (60.292422ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-569975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-569975
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-569975: (5.132373318s)
--- PASS: TestScheduledStopUnix (97.13s)

TestInsufficientStorage (12.31s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-059825 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-059825 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.028588387s)

-- stdout --
	{"specversion":"1.0","id":"21b8355b-cffc-4693-a3ca-e1e0532938fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-059825] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f20f839-6495-47b4-bab2-e00f2a298aa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"926a7510-d526-4583-9751-2b3238a50028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3cf896e2-7b52-42ee-972b-0714f9790ed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig"}}
	{"specversion":"1.0","id":"fbaeaf32-0300-4377-8f1b-6290283146fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube"}}
	{"specversion":"1.0","id":"faacfdd1-5abd-474b-9287-4b6d43cd9f42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"011a0831-147f-46dc-ac8e-c9bbc806ebd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ed328fd8-7717-45e0-999c-3f440d7daf6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f7369329-261c-498a-9692-4a9290c52d7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6bdaa797-abd9-4e73-a880-c59e9b8be81c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"43f6ae06-8279-4d62-be80-3283a67fcd18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e742ddb1-0157-41bb-92ae-9f397c3acf8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-059825\" primary control-plane node in \"insufficient-storage-059825\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ebc9ab72-dde5-4233-a9ea-cbd03126f33c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721324606-19298 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9389f7f0-df94-44f3-a71f-1e7de858c034","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"116d9332-80ae-4da5-bcc0-d729ff7e6507","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-059825 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-059825 --output=json --layout=cluster: exit status 7 (245.730794ms)

-- stdout --
	{"Name":"insufficient-storage-059825","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-059825","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0719 04:01:41.254903  198685 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-059825" does not appear in /home/jenkins/minikube-integration/19302-5122/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-059825 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-059825 --output=json --layout=cluster: exit status 7 (240.50687ms)

-- stdout --
	{"Name":"insufficient-storage-059825","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-059825","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0719 04:01:41.496623  198782 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-059825" does not appear in /home/jenkins/minikube-integration/19302-5122/kubeconfig
	E0719 04:01:41.506230  198782 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/insufficient-storage-059825/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-059825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-059825
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-059825: (1.797036498s)
--- PASS: TestInsufficientStorage (12.31s)

TestRunningBinaryUpgrade (83.88s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.20426780 start -p running-upgrade-137784 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.20426780 start -p running-upgrade-137784 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (50.517896568s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-137784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-137784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.505663125s)
helpers_test.go:175: Cleaning up "running-upgrade-137784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-137784
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-137784: (2.24024641s)
--- PASS: TestRunningBinaryUpgrade (83.88s)

TestKubernetesUpgrade (315.42s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-655013 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-655013 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.436774677s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-655013
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-655013: (1.192896996s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-655013 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-655013 status --format={{.Host}}: exit status 7 (68.535409ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-655013 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-655013 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m25.288748706s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-655013 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-655013 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-655013 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (64.048814ms)

-- stdout --
	* [kubernetes-upgrade-655013] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-655013
	    minikube start -p kubernetes-upgrade-655013 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6550132 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-655013 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-655013 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-655013 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4.942850769s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-655013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-655013
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-655013: (2.361992014s)
--- PASS: TestKubernetesUpgrade (315.42s)

TestMissingContainerUpgrade (185.24s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2392816485 start -p missing-upgrade-367658 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2392816485 start -p missing-upgrade-367658 --memory=2200 --driver=docker  --container-runtime=containerd: (1m43.46234812s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-367658
E0719 04:03:50.236311   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-367658: (10.306261182s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-367658
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-367658 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-367658 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.046439102s)
helpers_test.go:175: Cleaning up "missing-upgrade-367658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-367658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-367658: (2.7293828s)
--- PASS: TestMissingContainerUpgrade (185.24s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013056 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-013056 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (75.097672ms)

-- stdout --
	* [NoKubernetes-013056] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (31.26s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013056 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-013056 --driver=docker  --container-runtime=containerd: (30.880339429s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-013056 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.26s)

TestNetworkPlugins/group/false (7.65s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-823214 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-823214 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (175.879005ms)

-- stdout --
	* [false-823214] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0719 04:01:47.040232  201074 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:01:47.040552  201074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:01:47.040611  201074 out.go:304] Setting ErrFile to fd 2...
	I0719 04:01:47.040627  201074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:01:47.041042  201074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-5122/.minikube/bin
	I0719 04:01:47.041726  201074 out.go:298] Setting JSON to false
	I0719 04:01:47.042892  201074 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2651,"bootTime":1721359056,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 04:01:47.042956  201074 start.go:139] virtualization: kvm guest
	I0719 04:01:47.046059  201074 out.go:177] * [false-823214] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 04:01:47.047631  201074 notify.go:220] Checking for updates...
	I0719 04:01:47.047653  201074 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:01:47.049148  201074 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:01:47.050441  201074 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-5122/kubeconfig
	I0719 04:01:47.052025  201074 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-5122/.minikube
	I0719 04:01:47.053241  201074 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 04:01:47.054555  201074 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:01:47.056580  201074 config.go:182] Loaded profile config "NoKubernetes-013056": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 04:01:47.056716  201074 config.go:182] Loaded profile config "force-systemd-env-039864": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 04:01:47.056846  201074 config.go:182] Loaded profile config "offline-containerd-995424": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0719 04:01:47.056995  201074 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:01:47.087148  201074 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 04:01:47.087252  201074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:01:47.157254  201074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:93 SystemTime:2024-07-19 04:01:47.14505668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0719 04:01:47.157388  201074 docker.go:307] overlay module found
	I0719 04:01:47.160368  201074 out.go:177] * Using the docker driver based on user configuration
	I0719 04:01:47.162294  201074 start.go:297] selected driver: docker
	I0719 04:01:47.162315  201074 start.go:901] validating driver "docker" against <nil>
	I0719 04:01:47.162332  201074 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:01:47.164883  201074 out.go:177] 
	W0719 04:01:47.166146  201074 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0719 04:01:47.167361  201074 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-823214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-823214

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-823214

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-823214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-823214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-823214

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-823214

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-823214

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-823214

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-823214

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-823214

>>> host: /etc/nsswitch.conf:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /etc/hosts:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /etc/resolv.conf:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-823214

>>> host: crictl pods:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: crictl containers:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> k8s: describe netcat deployment:
error: context "false-823214" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-823214" does not exist

>>> k8s: netcat logs:
error: context "false-823214" does not exist

>>> k8s: describe coredns deployment:
error: context "false-823214" does not exist

>>> k8s: describe coredns pods:
error: context "false-823214" does not exist

>>> k8s: coredns logs:
error: context "false-823214" does not exist

>>> k8s: describe api server pod(s):
error: context "false-823214" does not exist

>>> k8s: api server logs:
error: context "false-823214" does not exist

>>> host: /etc/cni:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: ip a s:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: ip r s:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: iptables-save:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: iptables table nat:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"
>>> k8s: describe kube-proxy daemon set:
error: context "false-823214" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-823214" does not exist

>>> k8s: kube-proxy logs:
error: context "false-823214" does not exist

>>> host: kubelet daemon status:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: kubelet daemon config:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> k8s: kubelet logs:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-823214
>>> host: docker daemon status:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: docker daemon config:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /etc/docker/daemon.json:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: docker system info:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: cri-docker daemon status:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: cri-docker daemon config:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: cri-dockerd version:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: containerd daemon status:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: containerd daemon config:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /etc/containerd/config.toml:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: containerd config dump:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: crio daemon status:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: crio daemon config:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: /etc/crio:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

>>> host: crio config:
* Profile "false-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-823214"

----------------------- debugLogs end: false-823214 [took: 7.165134178s] --------------------------------
helpers_test.go:175: Cleaning up "false-823214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-823214
--- PASS: TestNetworkPlugins/group/false (7.65s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (22.78s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013056 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-013056 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.050528755s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-013056 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-013056 status -o json: exit status 2 (277.785323ms)

-- stdout --
	{"Name":"NoKubernetes-013056","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-013056
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-013056: (6.452213029s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.78s)
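The test above accepts exit status 2 from `minikube status -o json` because a non-zero status here signals stopped components, not a command failure: the host is `Running` while `Kubelet` and `APIServer` are `Stopped`. A minimal Python sketch of that check (the helper name `k8s_stopped` is hypothetical; it assumes only the fields shown in the stdout above):

```python
import json

# Status JSON exactly as emitted by `minikube status -o json` in the log above.
raw = ('{"Name":"NoKubernetes-013056","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

def k8s_stopped(status_json: str) -> bool:
    """True when the host VM/container is up but the Kubernetes components are stopped."""
    s = json.loads(status_json)
    return (s["Host"] == "Running"
            and s["Kubelet"] == "Stopped"
            and s["APIServer"] == "Stopped")

print(k8s_stopped(raw))  # → True
```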

                                                
                                    
TestNoKubernetes/serial/Start (5.98s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013056 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-013056 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.979359247s)
--- PASS: TestNoKubernetes/serial/Start (5.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-013056 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-013056 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.742772ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (9.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.506469104s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (6.628070813s)
--- PASS: TestNoKubernetes/serial/ProfileList (9.13s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.68s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.68s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (119.87s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.637107008 start -p stopped-upgrade-746858 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.637107008 start -p stopped-upgrade-746858 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m9.874722271s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.637107008 -p stopped-upgrade-746858 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.637107008 -p stopped-upgrade-746858 stop: (1.218322726s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-746858 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0719 04:04:16.764139   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-746858 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (48.772510212s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.87s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-013056
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-013056: (1.179729364s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.36s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013056 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-013056 --driver=docker  --container-runtime=containerd: (6.358292527s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-013056 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-013056 "sudo systemctl is-active --quiet service kubelet": exit status 1 (246.263932ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestPause/serial/Start (53.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-303666 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-303666 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (53.992461937s)
--- PASS: TestPause/serial/Start (53.99s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-746858
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.54s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-303666 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-303666 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.528273529s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.54s)

                                                
                                    
TestPause/serial/Pause (0.71s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-303666 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-303666 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-303666 --output=json --layout=cluster: exit status 2 (298.331129ms)

                                                
                                                
-- stdout --
	{"Name":"pause-303666","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-303666","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-303666 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.77s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-303666 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

TestPause/serial/DeletePaused (2.93s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-303666 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-303666 --alsologtostderr -v=5: (2.930112375s)
--- PASS: TestPause/serial/DeletePaused (2.93s)

TestPause/serial/VerifyDeletedResources (0.53s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-303666
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-303666: exit status 1 (17.544079ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-303666: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)

TestNetworkPlugins/group/auto/Start (53.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (53.896266372s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.90s)

TestNetworkPlugins/group/kindnet/Start (52.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (52.153083876s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.15s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-823214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-823214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4rxkp" [547b5e67-a353-41b1-a43a-8d79c50a68b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4rxkp" [547b5e67-a353-41b1-a43a-8d79c50a68b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003512766s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-823214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7z9p5" [3ee4ce14-52a0-4b40-a625-45311d9d22ef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003848513s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-823214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-823214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-h2wgn" [d0f85790-f1e4-41c2-a95d-9bd9b4b80cff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-h2wgn" [d0f85790-f1e4-41c2-a95d-9bd9b4b80cff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004162838s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

TestNetworkPlugins/group/calico/Start (60s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (59.994924967s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.00s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-823214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (51.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (51.366684162s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.37s)

TestNetworkPlugins/group/enable-default-cni/Start (40.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (40.126505389s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8b5tz" [3dd50a81-348f-47a2-8f6f-e76d82ce51cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005334574s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-823214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-823214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9n4gc" [27c9fe65-66b3-4092-93be-0d628bf1bb8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9n4gc" [27c9fe65-66b3-4092-93be-0d628bf1bb8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.002957893s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.17s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-823214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-823214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-823214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qzrbp" [823265af-229b-4a41-9c18-4ba0e6666e13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qzrbp" [823265af-229b-4a41-9c18-4ba0e6666e13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004018226s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-823214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-823214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gsk95" [e6ffaa61-b212-4863-bc8c-8bfa9f26d4d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gsk95" [e6ffaa61-b212-4863-bc8c-8bfa9f26d4d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003290525s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-823214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-823214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (56.7s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.695974317s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.70s)

TestNetworkPlugins/group/bridge/Start (40.22s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-823214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (40.219587351s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (134.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-846763 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-846763 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m14.671798229s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.67s)

TestStartStop/group/no-preload/serial/FirstStart (75.72s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-660685 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0719 04:08:50.236100   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-660685 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (1m15.716275863s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.72s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-823214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.57s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-823214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5jqpd" [e462d3ed-d9d3-4bc1-ad2e-2b9d9ffebf10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5jqpd" [e462d3ed-d9d3-4bc1-ad2e-2b9d9ffebf10] Running
E0719 04:09:16.764062   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004488464s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.57s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-823214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gzzrz" [0e9be78f-4ee3-4289-96ef-e73f0d06f4e3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004807482s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-823214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (8.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-823214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b2wd9" [c9cd11df-5e59-46fd-8f0a-74311fef976f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b2wd9" [c9cd11df-5e59-46fd-8f0a-74311fef976f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.00411022s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.23s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-823214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-823214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
E0719 04:13:50.236063   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 04:13:51.470591   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:56.515292   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:14:09.843303   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:14:10.438824   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:10.444125   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:10.454405   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:10.474716   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:10.515094   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:10.595424   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:10.755836   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:11.076177   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:11.717135   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:12.997845   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:14.499749   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:14:15.558609   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:16.764241   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/functional-966390/client.crt: no such file or directory
E0719 04:14:20.679844   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:23.767858   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:23.773133   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:23.783407   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:23.803724   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:23.843983   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:23.924295   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:24.084737   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:24.405373   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:25.046214   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:26.327170   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:28.887487   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:29.405069   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:14:30.920915   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
E0719 04:14:32.430774   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:14:34.008175   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory
E0719 04:14:37.476156   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:14:44.249322   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/flannel-823214/client.crt: no such file or directory

TestStartStop/group/embed-certs/serial/FirstStart (54.34s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-863542 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-863542 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (54.342848005s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.34s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-519188 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-519188 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m6.541745525s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.54s)

TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-660685 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0166aef2-1234-48a0-b6d4-77ac25e8a774] Pending
helpers_test.go:344: "busybox" [0166aef2-1234-48a0-b6d4-77ac25e8a774] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0166aef2-1234-48a0-b6d4-77ac25e8a774] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00402749s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-660685 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-660685 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-660685 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/no-preload/serial/Stop (12.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-660685 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-660685 --alsologtostderr -v=3: (12.045238094s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-660685 -n no-preload-660685
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-660685 -n no-preload-660685: exit status 7 (71.499131ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-660685 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (262.68s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-660685 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-660685 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (4m22.402963319s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-660685 -n no-preload-660685
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.68s)

TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-863542 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f040e55f-89d9-4d36-97e7-1e7dbac7852d] Pending
helpers_test.go:344: "busybox" [f040e55f-89d9-4d36-97e7-1e7dbac7852d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f040e55f-89d9-4d36-97e7-1e7dbac7852d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003909732s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-863542 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-863542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-863542 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/embed-certs/serial/Stop (12.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-863542 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-863542 --alsologtostderr -v=3: (12.250154131s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.25s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-863542 -n embed-certs-863542
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-863542 -n embed-certs-863542: exit status 7 (62.596374ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-863542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (262.71s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-863542 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-863542 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m22.427390449s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-863542 -n embed-certs-863542
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.71s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-846763 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e18ea4d-0536-43a2-aaf0-5c9801f3b922] Pending
helpers_test.go:344: "busybox" [3e18ea4d-0536-43a2-aaf0-5c9801f3b922] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e18ea4d-0536-43a2-aaf0-5c9801f3b922] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004219774s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-846763 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-519188 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9f5debad-e3f8-4af9-896e-0f8d38b5d95d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9f5debad-e3f8-4af9-896e-0f8d38b5d95d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003209608s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-519188 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-846763 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-846763 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-846763 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-846763 --alsologtostderr -v=3: (12.210167809s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-519188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-519188 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-519188 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-519188 --alsologtostderr -v=3: (16.048435074s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.05s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-846763 -n old-k8s-version-846763
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-846763 -n old-k8s-version-846763: exit status 7 (100.610117ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-846763 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (57.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-846763 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0719 04:11:26.000396   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:26.005671   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:26.015995   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:26.036247   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:26.076499   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:26.156830   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:26.317238   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:26.638062   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:27.278439   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:28.558777   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-846763 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (57.490525073s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-846763 -n old-k8s-version-846763
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (57.77s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519188 -n default-k8s-diff-port-519188
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519188 -n default-k8s-diff-port-519188: exit status 7 (87.309452ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-519188 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-519188 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0719 04:11:31.119943   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:36.240910   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:45.563100   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:45.568382   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:45.579015   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:45.599300   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:45.639570   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:45.719814   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:45.880189   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:46.200377   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:46.481184   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
E0719 04:11:46.841126   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:48.121903   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:50.682132   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:11:53.282213   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/addons-636193/client.crt: no such file or directory
E0719 04:11:55.802290   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:12:06.043284   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:12:06.961688   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/auto-823214/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-519188 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m22.00873201s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519188 -n default-k8s-diff-port-519188
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0719 04:12:26.524153   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pglw6" [ca2d97e9-aa01-4347-92f6-14dcf796c63a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pglw6" [ca2d97e9-aa01-4347-92f6-14dcf796c63a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.00358137s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pglw6" [ca2d97e9-aa01-4347-92f6-14dcf796c63a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004617091s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-846763 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-846763 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-846763 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-846763 -n old-k8s-version-846763
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-846763 -n old-k8s-version-846763: exit status 2 (269.390719ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-846763 -n old-k8s-version-846763
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-846763 -n old-k8s-version-846763: exit status 2 (274.585201ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-846763 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-846763 -n old-k8s-version-846763
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-846763 -n old-k8s-version-846763
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.43s)

TestStartStop/group/newest-cni/serial/FirstStart (26.6s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-269945 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0719 04:12:52.576820   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:52.582164   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:52.592457   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:52.612766   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:52.653114   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:52.733428   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:52.893786   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:53.214483   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:53.855089   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:55.135527   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:12:57.696627   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:13:02.817909   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:13:07.484599   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/kindnet-823214/client.crt: no such file or directory
E0719 04:13:10.510088   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:10.515333   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:10.525620   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:10.546005   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:10.586318   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:10.666443   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:10.826563   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:11.146913   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:11.787219   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:13.058681   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
E0719 04:13:13.067936   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:15.553156   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:15.558413   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:15.568923   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:15.589515   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:15.628731   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:15.629785   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:15.709908   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:15.870435   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:16.191066   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:16.831601   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-269945 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (26.604762168s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.60s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-269945 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/newest-cni/serial/Stop (1.18s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-269945 --alsologtostderr -v=3
E0719 04:13:18.112610   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-269945 --alsologtostderr -v=3: (1.18259788s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.18s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-269945 -n newest-cni-269945
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-269945 -n newest-cni-269945: exit status 7 (62.140354ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-269945 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/newest-cni/serial/SecondStart (12.82s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-269945 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0719 04:13:20.673337   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:20.749554   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:13:25.794201   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:13:30.990023   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-269945 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (12.516794571s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-269945 -n newest-cni-269945
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.82s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-269945 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-269945 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-269945 -n newest-cni-269945
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-269945 -n newest-cni-269945: exit status 2 (279.168136ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-269945 -n newest-cni-269945
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-269945 -n newest-cni-269945: exit status 2 (285.952058ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-269945 --alsologtostderr -v=1
E0719 04:13:33.539044   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/calico-823214/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-269945 -n newest-cni-269945
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-269945 -n newest-cni-269945
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.59s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-8b424" [cfd7500a-13f1-488c-a342-721199bf8896] Running
E0719 04:14:51.401739   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005026644s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-8b424" [cfd7500a-13f1-488c-a342-721199bf8896] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004278455s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-660685 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-660685 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.67s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-660685 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-660685 -n no-preload-660685
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-660685 -n no-preload-660685: exit status 2 (279.214201ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-660685 -n no-preload-660685
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-660685 -n no-preload-660685: exit status 2 (281.441171ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-660685 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-660685 -n no-preload-660685
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-660685 -n no-preload-660685
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.67s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-8k5mk" [ef4d63e7-659b-46cd-9756-b008cf491789] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003988087s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-8k5mk" [ef4d63e7-659b-46cd-9756-b008cf491789] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003183569s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-863542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-863542 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/embed-certs/serial/Pause (2.44s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-863542 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863542 -n embed-certs-863542
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863542 -n embed-certs-863542: exit status 2 (258.799423ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863542 -n embed-certs-863542
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863542 -n embed-certs-863542: exit status 2 (266.496149ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-863542 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863542 -n embed-certs-863542
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863542 -n embed-certs-863542
E0719 04:15:32.362091   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/bridge-823214/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.44s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-852rp" [583e59c0-5c73-4491-b508-c99bf1d7d5b2] Running
E0719 04:15:54.351620   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/custom-flannel-823214/client.crt: no such file or directory
E0719 04:15:56.372936   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
E0719 04:15:56.378186   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
E0719 04:15:56.388499   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
E0719 04:15:56.408756   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
E0719 04:15:56.449164   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
E0719 04:15:56.529476   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
E0719 04:15:56.689836   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
E0719 04:15:57.010486   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
E0719 04:15:57.650979   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
E0719 04:15:58.931227   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003630744s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-852rp" [583e59c0-5c73-4491-b508-c99bf1d7d5b2] Running
E0719 04:15:59.397203   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/enable-default-cni-823214/client.crt: no such file or directory
E0719 04:16:01.492313   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003706324s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-519188 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-519188 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-519188 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519188 -n default-k8s-diff-port-519188
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519188 -n default-k8s-diff-port-519188: exit status 2 (269.166627ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-519188 -n default-k8s-diff-port-519188
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-519188 -n default-k8s-diff-port-519188: exit status 2 (267.524704ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-519188 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519188 -n default-k8s-diff-port-519188
E0719 04:16:06.613569   11900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-5122/.minikube/profiles/old-k8s-version-846763/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-519188 -n default-k8s-diff-port-519188
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.51s)

Test skip (26/336)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.69s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-823214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-823214

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-823214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-823214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-823214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-823214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-823214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-823214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-823214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-823214" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-823214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-823214" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-823214" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: kubelet daemon config:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> k8s: kubelet logs:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-823214

>>> host: docker daemon status:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: docker daemon config:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: docker system info:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: cri-docker daemon status:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: cri-docker daemon config:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: cri-dockerd version:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: containerd daemon status:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: containerd daemon config:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: containerd config dump:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: crio daemon status:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: crio daemon config:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: /etc/crio:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

>>> host: crio config:
* Profile "kubenet-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-823214"

----------------------- debugLogs end: kubenet-823214 [took: 3.513936869s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-823214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-823214
--- SKIP: TestNetworkPlugins/group/kubenet (3.69s)

TestNetworkPlugins/group/cilium (3.71s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-823214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-823214

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-823214

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-823214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-823214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-823214

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-823214

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-823214

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-823214

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-823214

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-823214

>>> host: /etc/nsswitch.conf:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /etc/hosts:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /etc/resolv.conf:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-823214

>>> host: crictl pods:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: crictl containers:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> k8s: describe netcat deployment:
error: context "cilium-823214" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-823214" does not exist

>>> k8s: netcat logs:
error: context "cilium-823214" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-823214" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-823214" does not exist

>>> k8s: coredns logs:
error: context "cilium-823214" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-823214" does not exist

>>> k8s: api server logs:
error: context "cilium-823214" does not exist

>>> host: /etc/cni:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: ip a s:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: ip r s:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: iptables-save:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: iptables table nat:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-823214

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-823214

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-823214" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-823214" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-823214

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-823214

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-823214" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-823214" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-823214" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-823214" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-823214" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: kubelet daemon config:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> k8s: kubelet logs:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-823214

>>> host: docker daemon status:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: docker daemon config:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: docker system info:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: cri-docker daemon status:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: cri-docker daemon config:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: cri-dockerd version:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: containerd daemon status:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: containerd daemon config:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: containerd config dump:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: crio daemon status:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: crio daemon config:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: /etc/crio:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

>>> host: crio config:
* Profile "cilium-823214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-823214"

----------------------- debugLogs end: cilium-823214 [took: 3.561740449s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-823214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-823214
--- SKIP: TestNetworkPlugins/group/cilium (3.71s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-350587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-350587
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
