Test Report: Docker_Linux_crio 18051

a7ac499a82d5d3e781da4a49d780db6ba850b120:2024-02-01:32910

Failed tests (9/320)

TestAddons/parallel/Ingress (163.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-642352 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context addons-642352 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.498638458s)
addons_test.go:232: (dbg) Run:  kubectl --context addons-642352 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-642352 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [58372957-d99b-4103-b2c6-ed71643619c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [58372957-d99b-4103-b2c6-ed71643619c1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004003718s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-642352 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.914983254s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-642352 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-642352 addons disable ingress-dns --alsologtostderr -v=1: (1.064443062s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-642352 addons disable ingress --alsologtostderr -v=1: (7.686218983s)
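For reference, curl's exit code 28 (surfaced above as "ssh: Process exited with status 28") means the request timed out rather than being refused. The failing probe can be replayed by hand against the same profile; a sketch, assuming the addons-642352 profile is still running and the nginx ingress from testdata/nginx-ingress-v1.yaml is still applied:

	# confirm the controller and the ingress object are present
	kubectl --context addons-642352 -n ingress-nginx get pods
	kubectl --context addons-642352 get ingress
	# repeat the probe the test performs, with verbose output and an explicit timeout
	out/minikube-linux-amd64 -p addons-642352 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"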
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-642352
helpers_test.go:235: (dbg) docker inspect addons-642352:

-- stdout --
	[
	    {
	        "Id": "ba9aca09f642738d1e391d3fcd2462426a7803a0e2d60cc2f60823541ed64bf0",
	        "Created": "2024-02-01T09:09:12.895842154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 961921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-01T09:09:13.201657962Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/ba9aca09f642738d1e391d3fcd2462426a7803a0e2d60cc2f60823541ed64bf0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba9aca09f642738d1e391d3fcd2462426a7803a0e2d60cc2f60823541ed64bf0/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba9aca09f642738d1e391d3fcd2462426a7803a0e2d60cc2f60823541ed64bf0/hosts",
	        "LogPath": "/var/lib/docker/containers/ba9aca09f642738d1e391d3fcd2462426a7803a0e2d60cc2f60823541ed64bf0/ba9aca09f642738d1e391d3fcd2462426a7803a0e2d60cc2f60823541ed64bf0-json.log",
	        "Name": "/addons-642352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-642352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-642352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/08232dba750632eb38bb0b2fe50754717cd0b14f441b235797b8891b71e7a73d-init/diff:/var/lib/docker/overlay2/118cd56b7cf3f8f98e5d06fe937de6e8b842264a59a088dbb73626cf7e05fed3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08232dba750632eb38bb0b2fe50754717cd0b14f441b235797b8891b71e7a73d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08232dba750632eb38bb0b2fe50754717cd0b14f441b235797b8891b71e7a73d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08232dba750632eb38bb0b2fe50754717cd0b14f441b235797b8891b71e7a73d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-642352",
	                "Source": "/var/lib/docker/volumes/addons-642352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-642352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "MacAddress": "02:42:c0:a8:31:02",
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-642352",
	                "name.minikube.sigs.k8s.io": "addons-642352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "975dad5884d284464d7166c56e88780b141347c6563aa24ce5ca668f94dfc9b1",
	            "SandboxKey": "/var/run/docker/netns/975dad5884d2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34031"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34030"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34029"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34028"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-642352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ba9aca09f642",
	                        "addons-642352"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "3fb4120869864e9d72f1805b9a71b8e8b6af9ce94c7f797f8fe13608be3baf92",
	                    "EndpointID": "a9ae79bda69855100bb8f30040eeed282c5decccef932715a0120a4c8769d354",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-642352",
	                        "ba9aca09f642"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
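When only a single field from this inspect output is needed, docker inspect's Go-template --format flag narrows it down; for example, the forwarded SSH port and the container IP that minikube itself queries further down in this log:

	# host port mapped to the container's 22/tcp (34031 in the output above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-642352
	# container IP on the addons-642352 network (192.168.49.2 in the output above)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-642352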
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-642352 -n addons-642352
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-642352 logs -n 25: (1.282192008s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-625877                                                                     | download-only-625877   | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| delete  | -p download-only-057828                                                                     | download-only-057828   | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| start   | --download-only -p                                                                          | download-docker-452662 | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC |                     |
	|         | download-docker-452662                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-452662                                                                   | download-docker-452662 | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-807134   | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC |                     |
	|         | binary-mirror-807134                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34923                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-807134                                                                     | binary-mirror-807134   | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| addons  | enable dashboard -p                                                                         | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC |                     |
	|         | addons-642352                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC |                     |
	|         | addons-642352                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-642352 --wait=true                                                                | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:11 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC | 01 Feb 24 09:11 UTC |
	|         | addons-642352                                                                               |                        |         |         |                     |                     |
	| ip      | addons-642352 ip                                                                            | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC | 01 Feb 24 09:11 UTC |
	| addons  | addons-642352 addons disable                                                                | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC | 01 Feb 24 09:11 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-642352 ssh curl -s                                                                   | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC | 01 Feb 24 09:11 UTC |
	|         | -p addons-642352                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC | 01 Feb 24 09:11 UTC |
	|         | addons-642352                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC | 01 Feb 24 09:11 UTC |
	|         | -p addons-642352                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-642352 ssh cat                                                                       | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC | 01 Feb 24 09:11 UTC |
	|         | /opt/local-path-provisioner/pvc-5a4495e1-0e0a-490e-9234-87dcffee5021_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-642352 addons disable                                                                | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC | 01 Feb 24 09:12 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-642352 addons disable                                                                | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:11 UTC | 01 Feb 24 09:11 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-642352 addons                                                                        | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:12 UTC | 01 Feb 24 09:12 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-642352 addons                                                                        | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:12 UTC | 01 Feb 24 09:12 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-642352 addons                                                                        | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:12 UTC | 01 Feb 24 09:12 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-642352 ip                                                                            | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:13 UTC | 01 Feb 24 09:13 UTC |
	| addons  | addons-642352 addons disable                                                                | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:13 UTC | 01 Feb 24 09:13 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-642352 addons disable                                                                | addons-642352          | jenkins | v1.32.0 | 01 Feb 24 09:13 UTC | 01 Feb 24 09:13 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/01 09:08:51
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0201 09:08:51.669670  961265 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:08:51.669954  961265 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:08:51.669964  961265 out.go:309] Setting ErrFile to fd 2...
	I0201 09:08:51.669969  961265 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:08:51.670158  961265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:08:51.670877  961265 out.go:303] Setting JSON to false
	I0201 09:08:51.671863  961265 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":57079,"bootTime":1706721453,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0201 09:08:51.671947  961265 start.go:138] virtualization: kvm guest
	I0201 09:08:51.674252  961265 out.go:177] * [addons-642352] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0201 09:08:51.675817  961265 out.go:177]   - MINIKUBE_LOCATION=18051
	I0201 09:08:51.675803  961265 notify.go:220] Checking for updates...
	I0201 09:08:51.677509  961265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0201 09:08:51.679149  961265 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:08:51.680682  961265 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	I0201 09:08:51.682026  961265 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0201 09:08:51.683487  961265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0201 09:08:51.685035  961265 driver.go:392] Setting default libvirt URI to qemu:///system
	I0201 09:08:51.706959  961265 docker.go:122] docker version: linux-25.0.2:Docker Engine - Community
	I0201 09:08:51.707102  961265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:08:51.762926  961265 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-01 09:08:51.75007396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors
:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:08:51.763054  961265 docker.go:295] overlay module found
	I0201 09:08:51.766090  961265 out.go:177] * Using the docker driver based on user configuration
	I0201 09:08:51.768467  961265 start.go:298] selected driver: docker
	I0201 09:08:51.768497  961265 start.go:902] validating driver "docker" against <nil>
	I0201 09:08:51.768513  961265 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0201 09:08:51.769404  961265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:08:51.822224  961265 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-01 09:08:51.812496942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:08:51.822412  961265 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0201 09:08:51.822638  961265 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0201 09:08:51.824231  961265 out.go:177] * Using Docker driver with root privileges
	I0201 09:08:51.825641  961265 cni.go:84] Creating CNI manager for ""
	I0201 09:08:51.825663  961265 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0201 09:08:51.825675  961265 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0201 09:08:51.825691  961265 start_flags.go:321] config:
	{Name:addons-642352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-642352 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0201 09:08:51.827405  961265 out.go:177] * Starting control plane node addons-642352 in cluster addons-642352
	I0201 09:08:51.828745  961265 cache.go:121] Beginning downloading kic base image for docker with crio
	I0201 09:08:51.830158  961265 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0201 09:08:51.831552  961265 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0201 09:08:51.831605  961265 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0201 09:08:51.831620  961265 cache.go:56] Caching tarball of preloaded images
	I0201 09:08:51.831630  961265 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0201 09:08:51.831734  961265 preload.go:174] Found /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0201 09:08:51.831756  961265 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0201 09:08:51.832116  961265 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/config.json ...
	I0201 09:08:51.832147  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/config.json: {Name:mk506e4fa5282228eed2a690e4dcba8c71c2e923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:08:51.847740  961265 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0201 09:08:51.847899  961265 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0201 09:08:51.847921  961265 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0201 09:08:51.847928  961265 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0201 09:08:51.847942  961265 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0201 09:08:51.847954  961265 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0201 09:09:03.392056  961265 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0201 09:09:03.392097  961265 cache.go:194] Successfully downloaded all kic artifacts
	I0201 09:09:03.392145  961265 start.go:365] acquiring machines lock for addons-642352: {Name:mk0329e506b4aa0b70097346accee9e5da4e37de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0201 09:09:03.392253  961265 start.go:369] acquired machines lock for "addons-642352" in 85.167µs
	I0201 09:09:03.392276  961265 start.go:93] Provisioning new machine with config: &{Name:addons-642352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-642352 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0201 09:09:03.392394  961265 start.go:125] createHost starting for "" (driver="docker")
	I0201 09:09:03.394535  961265 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0201 09:09:03.394797  961265 start.go:159] libmachine.API.Create for "addons-642352" (driver="docker")
	I0201 09:09:03.394825  961265 client.go:168] LocalClient.Create starting
	I0201 09:09:03.394986  961265 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem
	I0201 09:09:03.568980  961265 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/cert.pem
	I0201 09:09:03.674325  961265 cli_runner.go:164] Run: docker network inspect addons-642352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0201 09:09:03.690523  961265 cli_runner.go:211] docker network inspect addons-642352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0201 09:09:03.690604  961265 network_create.go:281] running [docker network inspect addons-642352] to gather additional debugging logs...
	I0201 09:09:03.690644  961265 cli_runner.go:164] Run: docker network inspect addons-642352
	W0201 09:09:03.706562  961265 cli_runner.go:211] docker network inspect addons-642352 returned with exit code 1
	I0201 09:09:03.706601  961265 network_create.go:284] error running [docker network inspect addons-642352]: docker network inspect addons-642352: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-642352 not found
	I0201 09:09:03.706614  961265 network_create.go:286] output of [docker network inspect addons-642352]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-642352 not found
	
	** /stderr **
	I0201 09:09:03.706723  961265 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0201 09:09:03.724897  961265 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00206ec80}
	I0201 09:09:03.724955  961265 network_create.go:124] attempt to create docker network addons-642352 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0201 09:09:03.725096  961265 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-642352 addons-642352
	I0201 09:09:03.785501  961265 network_create.go:108] docker network addons-642352 192.168.49.0/24 created
	I0201 09:09:03.785534  961265 kic.go:121] calculated static IP "192.168.49.2" for the "addons-642352" container
	I0201 09:09:03.785604  961265 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0201 09:09:03.801814  961265 cli_runner.go:164] Run: docker volume create addons-642352 --label name.minikube.sigs.k8s.io=addons-642352 --label created_by.minikube.sigs.k8s.io=true
	I0201 09:09:03.819955  961265 oci.go:103] Successfully created a docker volume addons-642352
	I0201 09:09:03.820041  961265 cli_runner.go:164] Run: docker run --rm --name addons-642352-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-642352 --entrypoint /usr/bin/test -v addons-642352:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0201 09:09:07.606044  961265 cli_runner.go:217] Completed: docker run --rm --name addons-642352-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-642352 --entrypoint /usr/bin/test -v addons-642352:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (3.785961607s)
	I0201 09:09:07.606078  961265 oci.go:107] Successfully prepared a docker volume addons-642352
	I0201 09:09:07.606117  961265 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0201 09:09:07.606145  961265 kic.go:194] Starting extracting preloaded images to volume ...
	I0201 09:09:07.606211  961265 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-642352:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0201 09:09:12.826375  961265 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-642352:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.220115303s)
	I0201 09:09:12.826432  961265 kic.go:203] duration metric: took 5.220283 seconds to extract preloaded images to volume
	W0201 09:09:12.826595  961265 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0201 09:09:12.826719  961265 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0201 09:09:12.878126  961265 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-642352 --name addons-642352 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-642352 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-642352 --network addons-642352 --ip 192.168.49.2 --volume addons-642352:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0201 09:09:13.209618  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Running}}
	I0201 09:09:13.227279  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:13.245549  961265 cli_runner.go:164] Run: docker exec addons-642352 stat /var/lib/dpkg/alternatives/iptables
	I0201 09:09:13.289307  961265 oci.go:144] the created container "addons-642352" has a running status.
	I0201 09:09:13.289339  961265 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa...
	I0201 09:09:13.403981  961265 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0201 09:09:13.425549  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:13.445754  961265 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0201 09:09:13.445780  961265 kic_runner.go:114] Args: [docker exec --privileged addons-642352 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0201 09:09:13.490199  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:13.510118  961265 machine.go:88] provisioning docker machine ...
	I0201 09:09:13.510176  961265 ubuntu.go:169] provisioning hostname "addons-642352"
	I0201 09:09:13.510251  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:13.527852  961265 main.go:141] libmachine: Using SSH client type: native
	I0201 09:09:13.528230  961265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 127.0.0.1 34031 <nil> <nil>}
	I0201 09:09:13.528253  961265 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-642352 && echo "addons-642352" | sudo tee /etc/hostname
	I0201 09:09:13.528867  961265 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50534->127.0.0.1:34031: read: connection reset by peer
	I0201 09:09:16.678729  961265 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-642352
	
	I0201 09:09:16.678830  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:16.695704  961265 main.go:141] libmachine: Using SSH client type: native
	I0201 09:09:16.696081  961265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 127.0.0.1 34031 <nil> <nil>}
	I0201 09:09:16.696101  961265 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-642352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-642352/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-642352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0201 09:09:16.830889  961265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0201 09:09:16.830937  961265 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18051-952908/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-952908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-952908/.minikube}
	I0201 09:09:16.830962  961265 ubuntu.go:177] setting up certificates
	I0201 09:09:16.830973  961265 provision.go:83] configureAuth start
	I0201 09:09:16.831031  961265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-642352
	I0201 09:09:16.847953  961265 provision.go:138] copyHostCerts
	I0201 09:09:16.848027  961265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-952908/.minikube/ca.pem (1078 bytes)
	I0201 09:09:16.848147  961265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-952908/.minikube/cert.pem (1123 bytes)
	I0201 09:09:16.848206  961265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-952908/.minikube/key.pem (1675 bytes)
	I0201 09:09:16.848255  961265 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-952908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca-key.pem org=jenkins.addons-642352 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-642352]
	I0201 09:09:16.935014  961265 provision.go:172] copyRemoteCerts
	I0201 09:09:16.935075  961265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0201 09:09:16.935115  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:16.951703  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:17.046857  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0201 09:09:17.068098  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0201 09:09:17.088978  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0201 09:09:17.110156  961265 provision.go:86] duration metric: configureAuth took 279.168949ms
	I0201 09:09:17.110183  961265 ubuntu.go:193] setting minikube options for container-runtime
	I0201 09:09:17.110386  961265 config.go:182] Loaded profile config "addons-642352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:09:17.110543  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:17.126712  961265 main.go:141] libmachine: Using SSH client type: native
	I0201 09:09:17.127039  961265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 127.0.0.1 34031 <nil> <nil>}
	I0201 09:09:17.127057  961265 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0201 09:09:17.347745  961265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0201 09:09:17.347773  961265 machine.go:91] provisioned docker machine in 3.837629984s
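The literal %!s(MISSING) in the command logged a few lines above (and the %!p(MISSING) and "0%!"(MISSING) tokens that appear later in this log) is Go's fmt package rendering a formatting verb that was given no matching operand when the command template was written to the log; it is not shell syntax that actually ran. A minimal standalone Go illustration of that behaviour (the strings here are made up for the example):

package main

import "fmt"

func main() {
	// A %s verb with no matching operand is rendered by fmt as %!s(MISSING);
	// that token is what ends up verbatim in the logged command line.
	cmd := fmt.Sprintf("printf %s %s", "'CRIO_MINIKUBE_OPTIONS=...'")
	fmt.Println(cmd) // printf 'CRIO_MINIKUBE_OPTIONS=...' %!s(MISSING)
}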
	I0201 09:09:17.347786  961265 client.go:171] LocalClient.Create took 13.95295104s
	I0201 09:09:17.347810  961265 start.go:167] duration metric: libmachine.API.Create for "addons-642352" took 13.953013348s
	I0201 09:09:17.347824  961265 start.go:300] post-start starting for "addons-642352" (driver="docker")
	I0201 09:09:17.347838  961265 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0201 09:09:17.347894  961265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0201 09:09:17.347941  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:17.364950  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:17.459781  961265 ssh_runner.go:195] Run: cat /etc/os-release
	I0201 09:09:17.463105  961265 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0201 09:09:17.463152  961265 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0201 09:09:17.463167  961265 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0201 09:09:17.463177  961265 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0201 09:09:17.463191  961265 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-952908/.minikube/addons for local assets ...
	I0201 09:09:17.463259  961265 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-952908/.minikube/files for local assets ...
	I0201 09:09:17.463291  961265 start.go:303] post-start completed in 115.458192ms
	I0201 09:09:17.463575  961265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-642352
	I0201 09:09:17.480363  961265 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/config.json ...
	I0201 09:09:17.480640  961265 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0201 09:09:17.480695  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:17.497425  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:17.587367  961265 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0201 09:09:17.591772  961265 start.go:128] duration metric: createHost completed in 14.199360487s
	I0201 09:09:17.591799  961265 start.go:83] releasing machines lock for "addons-642352", held for 14.199536094s
	I0201 09:09:17.591869  961265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-642352
	I0201 09:09:17.608601  961265 ssh_runner.go:195] Run: cat /version.json
	I0201 09:09:17.608646  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:17.608670  961265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0201 09:09:17.608764  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:17.627687  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:17.627789  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:17.813483  961265 ssh_runner.go:195] Run: systemctl --version
	I0201 09:09:17.817796  961265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0201 09:09:17.955033  961265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0201 09:09:17.960639  961265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0201 09:09:17.979028  961265 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0201 09:09:17.979119  961265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0201 09:09:18.005958  961265 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0201 09:09:18.005983  961265 start.go:475] detecting cgroup driver to use...
	I0201 09:09:18.006016  961265 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0201 09:09:18.006058  961265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0201 09:09:18.020498  961265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0201 09:09:18.031323  961265 docker.go:217] disabling cri-docker service (if available) ...
	I0201 09:09:18.031403  961265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0201 09:09:18.044317  961265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0201 09:09:18.057191  961265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0201 09:09:18.131732  961265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0201 09:09:18.207232  961265 docker.go:233] disabling docker service ...
	I0201 09:09:18.207316  961265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0201 09:09:18.226453  961265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0201 09:09:18.236959  961265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0201 09:09:18.315422  961265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0201 09:09:18.395960  961265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0201 09:09:18.406512  961265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0201 09:09:18.420730  961265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0201 09:09:18.420792  961265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0201 09:09:18.429621  961265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0201 09:09:18.429683  961265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0201 09:09:18.438390  961265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0201 09:09:18.447103  961265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0201 09:09:18.456037  961265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0201 09:09:18.464499  961265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0201 09:09:18.472274  961265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0201 09:09:18.480144  961265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0201 09:09:18.551612  961265 ssh_runner.go:195] Run: sudo systemctl restart crio
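The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as its pause image and cgroupfs as its cgroup manager (with conmon placed in the pod cgroup), after which CRI-O is restarted. A rough standalone Go sketch of the same sequence, assuming a root-capable host with CRI-O installed (this is not minikube's own runner code):

package main

import (
	"fmt"
	"os/exec"
)

// Standalone sketch of the CRI-O tweaks shown above: pin the pause image,
// switch the cgroup manager to cgroupfs, move conmon into the pod cgroup,
// then reload systemd units and restart CRI-O.
func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := [][]string{
		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, conf},
		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"sudo", "sed", "-i", `/conmon_cgroup = .*/d`, conf},
		{"sudo", "sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s\n", c, err, out)
			return
		}
	}
	fmt.Println("CRI-O reconfigured and restarted")
}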
	I0201 09:09:18.660132  961265 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0201 09:09:18.660225  961265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0201 09:09:18.663686  961265 start.go:543] Will wait 60s for crictl version
	I0201 09:09:18.663733  961265 ssh_runner.go:195] Run: which crictl
	I0201 09:09:18.666982  961265 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0201 09:09:18.702079  961265 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0201 09:09:18.702186  961265 ssh_runner.go:195] Run: crio --version
	I0201 09:09:18.740106  961265 ssh_runner.go:195] Run: crio --version
	I0201 09:09:18.778328  961265 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0201 09:09:18.780157  961265 cli_runner.go:164] Run: docker network inspect addons-642352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0201 09:09:18.797471  961265 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0201 09:09:18.801282  961265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0201 09:09:18.811912  961265 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0201 09:09:18.811969  961265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0201 09:09:18.867057  961265 crio.go:496] all images are preloaded for cri-o runtime.
	I0201 09:09:18.867084  961265 crio.go:415] Images already preloaded, skipping extraction
	I0201 09:09:18.867140  961265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0201 09:09:18.899554  961265 crio.go:496] all images are preloaded for cri-o runtime.
	I0201 09:09:18.899584  961265 cache_images.go:84] Images are preloaded, skipping loading
	I0201 09:09:18.899653  961265 ssh_runner.go:195] Run: crio config
	I0201 09:09:18.940574  961265 cni.go:84] Creating CNI manager for ""
	I0201 09:09:18.940596  961265 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0201 09:09:18.940616  961265 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0201 09:09:18.940638  961265 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-642352 NodeName:addons-642352 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0201 09:09:18.940837  961265 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-642352"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0201 09:09:18.940932  961265 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-642352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-642352 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0201 09:09:18.941006  961265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0201 09:09:18.949429  961265 binaries.go:44] Found k8s binaries, skipping transfer
	I0201 09:09:18.949499  961265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0201 09:09:18.957612  961265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0201 09:09:18.973693  961265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0201 09:09:18.989912  961265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
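At this point the kubeadm configuration shown earlier has been staged on the node as /var/tmp/minikube/kubeadm.yaml.new; it is copied to /var/tmp/minikube/kubeadm.yaml and handed to kubeadm init further down in this log. A hedged sketch of a pre-flight sanity check that such a staged file is a well-formed multi-document YAML stream, using gopkg.in/yaml.v3 (an assumption made for this example; minikube does not necessarily perform this step):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Illustrative check: confirm the staged kubeadm config parses as a
// multi-document YAML stream and list the apiVersion/kind of each document
// before it is fed to `kubeadm init --config ...`.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}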
	I0201 09:09:19.005857  961265 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0201 09:09:19.009056  961265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
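Both /etc/hosts updates in this log (host.minikube.internal earlier and control-plane.minikube.internal here) follow the same idempotent pattern: drop any existing line ending in the tab-separated host name, append the new mapping, and copy the temporary file back into place with sudo. A small Go equivalent of the filter-and-append step, written against a staging file rather than /etc/hosts itself (illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Illustrative only: the "filter out the old entry, append the new one"
// update that the bash one-liners above perform.
func main() {
	const name = "control-plane.minikube.internal"
	const entry = "192.168.49.2\t" + name

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	// Stage the result in a temp file; the privileged copy back over
	// /etc/hosts is the `sudo cp` step shown in the log and omitted here.
	tmp := fmt.Sprintf("/tmp/hosts.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("staged", tmp)
}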
	I0201 09:09:19.018722  961265 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352 for IP: 192.168.49.2
	I0201 09:09:19.018764  961265 certs.go:190] acquiring lock for shared ca certs: {Name:mk23a064dbf71f5683ee734795fa9d1b12119a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.018877  961265 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.key
	I0201 09:09:19.088342  961265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt ...
	I0201 09:09:19.088377  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt: {Name:mk8450580b08f8de8f4caaabc244b0b9a3e07465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.088546  961265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-952908/.minikube/ca.key ...
	I0201 09:09:19.088559  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/ca.key: {Name:mk91cd72630134b0a10147cff7c5d02901665741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.088629  961265 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.key
	I0201 09:09:19.357748  961265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.crt ...
	I0201 09:09:19.357783  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.crt: {Name:mk66b0f49e83a54bef9524ee91b65f71d36e089f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.357951  961265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.key ...
	I0201 09:09:19.357962  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.key: {Name:mk6b48ad38fe9fca4cac1d28ae7c569405eab3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.358072  961265 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.key
	I0201 09:09:19.358086  961265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt with IP's: []
	I0201 09:09:19.423244  961265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt ...
	I0201 09:09:19.423279  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: {Name:mk4e8d63d0bbef158801ff5e35453830c343311f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.423436  961265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.key ...
	I0201 09:09:19.423447  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.key: {Name:mk3ae9cf645fb1aef89a6dde78e942a7398b99bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.423519  961265 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.key.dd3b5fb2
	I0201 09:09:19.423537  961265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0201 09:09:19.701686  961265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.crt.dd3b5fb2 ...
	I0201 09:09:19.701723  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.crt.dd3b5fb2: {Name:mka4bc6482a7c74cd5a9648132e0c73303a055e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.701896  961265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.key.dd3b5fb2 ...
	I0201 09:09:19.701911  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.key.dd3b5fb2: {Name:mk85c9bfb68d6bab490146263d1bf431892ec4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.701986  961265 certs.go:337] copying /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.crt
	I0201 09:09:19.702051  961265 certs.go:341] copying /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.key
	I0201 09:09:19.702093  961265 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/proxy-client.key
	I0201 09:09:19.702112  961265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/proxy-client.crt with IP's: []
	I0201 09:09:19.832638  961265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/proxy-client.crt ...
	I0201 09:09:19.832674  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/proxy-client.crt: {Name:mk61769422e207c5e1f67fa2a72f5b75fd28e6bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:19.832849  961265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/proxy-client.key ...
	I0201 09:09:19.832863  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/proxy-client.key: {Name:mkfc418ee52ab04e61a516cbff6ec8d9d63c7c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
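The certificates generated above include an apiserver serving certificate whose SANs are logged as [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]. A short Go snippet that parses such a certificate and prints its SANs, which can confirm what was actually written to disk (the path is copied from the log; the check itself is illustrative and not part of the test):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Illustrative check: parse the generated apiserver certificate and print
// its DNS and IP SANs.
func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}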
	I0201 09:09:19.833065  961265 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca-key.pem (1679 bytes)
	I0201 09:09:19.833103  961265 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem (1078 bytes)
	I0201 09:09:19.833125  961265 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/cert.pem (1123 bytes)
	I0201 09:09:19.833146  961265 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/key.pem (1675 bytes)
	I0201 09:09:19.833737  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0201 09:09:19.856837  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0201 09:09:19.879143  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0201 09:09:19.901441  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0201 09:09:19.923291  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0201 09:09:19.945125  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0201 09:09:19.966737  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0201 09:09:19.987832  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0201 09:09:20.009196  961265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0201 09:09:20.030994  961265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0201 09:09:20.046938  961265 ssh_runner.go:195] Run: openssl version
	I0201 09:09:20.052019  961265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0201 09:09:20.060548  961265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0201 09:09:20.063919  961265 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb  1 09:09 /usr/share/ca-certificates/minikubeCA.pem
	I0201 09:09:20.063963  961265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0201 09:09:20.070126  961265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0201 09:09:20.078526  961265 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0201 09:09:20.081686  961265 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0201 09:09:20.081743  961265 kubeadm.go:404] StartCluster: {Name:addons-642352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-642352 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0201 09:09:20.081822  961265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0201 09:09:20.081883  961265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0201 09:09:20.115818  961265 cri.go:89] found id: ""
	I0201 09:09:20.115879  961265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0201 09:09:20.124121  961265 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0201 09:09:20.132398  961265 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0201 09:09:20.132452  961265 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0201 09:09:20.140214  961265 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0201 09:09:20.140262  961265 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0201 09:09:20.183277  961265 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0201 09:09:20.183353  961265 kubeadm.go:322] [preflight] Running pre-flight checks
	I0201 09:09:20.220765  961265 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0201 09:09:20.220862  961265 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-gcp
	I0201 09:09:20.220912  961265 kubeadm.go:322] OS: Linux
	I0201 09:09:20.220963  961265 kubeadm.go:322] CGROUPS_CPU: enabled
	I0201 09:09:20.221029  961265 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0201 09:09:20.221092  961265 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0201 09:09:20.221146  961265 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0201 09:09:20.221187  961265 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0201 09:09:20.221277  961265 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0201 09:09:20.221356  961265 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0201 09:09:20.221442  961265 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0201 09:09:20.221501  961265 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0201 09:09:20.284833  961265 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0201 09:09:20.284988  961265 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0201 09:09:20.285129  961265 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0201 09:09:20.485762  961265 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0201 09:09:20.488667  961265 out.go:204]   - Generating certificates and keys ...
	I0201 09:09:20.488789  961265 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0201 09:09:20.488887  961265 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0201 09:09:20.670233  961265 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0201 09:09:20.759031  961265 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0201 09:09:20.916162  961265 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0201 09:09:21.000869  961265 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0201 09:09:21.218037  961265 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0201 09:09:21.218220  961265 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-642352 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0201 09:09:21.405168  961265 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0201 09:09:21.405327  961265 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-642352 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0201 09:09:21.729915  961265 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0201 09:09:22.024394  961265 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0201 09:09:22.107168  961265 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0201 09:09:22.107286  961265 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0201 09:09:22.165730  961265 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0201 09:09:22.257276  961265 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0201 09:09:22.358563  961265 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0201 09:09:22.565604  961265 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0201 09:09:22.567150  961265 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0201 09:09:22.569348  961265 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0201 09:09:22.571545  961265 out.go:204]   - Booting up control plane ...
	I0201 09:09:22.571646  961265 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0201 09:09:22.571740  961265 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0201 09:09:22.571829  961265 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0201 09:09:22.579873  961265 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0201 09:09:22.580591  961265 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0201 09:09:22.580668  961265 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0201 09:09:22.666877  961265 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0201 09:09:27.669039  961265 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002277 seconds
	I0201 09:09:27.669228  961265 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0201 09:09:27.684587  961265 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0201 09:09:28.204990  961265 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0201 09:09:28.205191  961265 kubeadm.go:322] [mark-control-plane] Marking the node addons-642352 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0201 09:09:28.715329  961265 kubeadm.go:322] [bootstrap-token] Using token: fkfict.yuj3rfx43mdvtl9z
	I0201 09:09:28.716948  961265 out.go:204]   - Configuring RBAC rules ...
	I0201 09:09:28.717077  961265 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0201 09:09:28.721423  961265 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0201 09:09:28.728086  961265 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0201 09:09:28.730962  961265 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0201 09:09:28.737240  961265 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0201 09:09:28.742431  961265 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0201 09:09:28.758012  961265 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0201 09:09:28.958867  961265 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0201 09:09:29.135285  961265 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0201 09:09:29.136898  961265 kubeadm.go:322] 
	I0201 09:09:29.137084  961265 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0201 09:09:29.137102  961265 kubeadm.go:322] 
	I0201 09:09:29.137184  961265 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0201 09:09:29.137195  961265 kubeadm.go:322] 
	I0201 09:09:29.137224  961265 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0201 09:09:29.137285  961265 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0201 09:09:29.137358  961265 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0201 09:09:29.137370  961265 kubeadm.go:322] 
	I0201 09:09:29.137425  961265 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0201 09:09:29.137432  961265 kubeadm.go:322] 
	I0201 09:09:29.137497  961265 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0201 09:09:29.137507  961265 kubeadm.go:322] 
	I0201 09:09:29.137563  961265 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0201 09:09:29.137657  961265 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0201 09:09:29.137742  961265 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0201 09:09:29.137756  961265 kubeadm.go:322] 
	I0201 09:09:29.137891  961265 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0201 09:09:29.138002  961265 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0201 09:09:29.138010  961265 kubeadm.go:322] 
	I0201 09:09:29.138109  961265 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fkfict.yuj3rfx43mdvtl9z \
	I0201 09:09:29.138235  961265 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7910553c67bf33c7893af1499c33a494f0bc07d5d4917285901e8697cae63a23 \
	I0201 09:09:29.138267  961265 kubeadm.go:322] 	--control-plane 
	I0201 09:09:29.138285  961265 kubeadm.go:322] 
	I0201 09:09:29.138420  961265 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0201 09:09:29.138437  961265 kubeadm.go:322] 
	I0201 09:09:29.138532  961265 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fkfict.yuj3rfx43mdvtl9z \
	I0201 09:09:29.138656  961265 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7910553c67bf33c7893af1499c33a494f0bc07d5d4917285901e8697cae63a23 
	I0201 09:09:29.141008  961265 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-gcp\n", err: exit status 1
	I0201 09:09:29.141134  961265 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0201 09:09:29.141165  961265 cni.go:84] Creating CNI manager for ""
	I0201 09:09:29.141175  961265 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0201 09:09:29.142900  961265 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0201 09:09:29.144446  961265 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0201 09:09:29.149570  961265 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0201 09:09:29.149590  961265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0201 09:09:29.168496  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0201 09:09:29.935985  961265 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0201 09:09:29.936073  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:29.936091  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=addons-642352 minikube.k8s.io/updated_at=2024_02_01T09_09_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:29.943226  961265 ops.go:34] apiserver oom_adj: -16
	I0201 09:09:30.045308  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:30.545364  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:31.046182  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:31.546085  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:32.045472  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:32.545807  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:33.046045  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:33.546351  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:34.045491  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:34.546330  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:35.045734  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:35.546302  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:36.045679  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:36.545443  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:37.045564  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:37.545339  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:38.045410  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:38.546067  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:39.046051  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:39.545549  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:40.045939  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:40.546198  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:41.045833  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:41.546382  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:42.046182  961265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:09:42.117313  961265 kubeadm.go:1088] duration metric: took 12.181310623s to wait for elevateKubeSystemPrivileges.
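The block of repeated `kubectl get sa default` runs above is a poll loop: the command is retried roughly every 500ms until the default service account exists, which is why the step is reported as taking about 12.18s. A minimal Go sketch of that polling pattern, reusing the paths and flags from the log (the loop itself is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Retry `kubectl get sa default` every 500ms until the default service
// account exists or a deadline passes.
func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}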
	I0201 09:09:42.117352  961265 kubeadm.go:406] StartCluster complete in 22.035616435s
	I0201 09:09:42.117372  961265 settings.go:142] acquiring lock: {Name:mk0819893db79284ba714854fba438996c690ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:42.117477  961265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:09:42.117945  961265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/kubeconfig: {Name:mk4dec6d7936952ed996b642fbbfa2a496c41523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:09:42.118291  961265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0201 09:09:42.118377  961265 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0201 09:09:42.118521  961265 config.go:182] Loaded profile config "addons-642352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:09:42.118535  961265 addons.go:69] Setting default-storageclass=true in profile "addons-642352"
	I0201 09:09:42.118549  961265 addons.go:69] Setting yakd=true in profile "addons-642352"
	I0201 09:09:42.118561  961265 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-642352"
	I0201 09:09:42.118568  961265 addons.go:234] Setting addon yakd=true in "addons-642352"
	I0201 09:09:42.118568  961265 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-642352"
	I0201 09:09:42.118554  961265 addons.go:69] Setting cloud-spanner=true in profile "addons-642352"
	I0201 09:09:42.118589  961265 addons.go:234] Setting addon cloud-spanner=true in "addons-642352"
	I0201 09:09:42.118605  961265 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-642352"
	I0201 09:09:42.118619  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.118634  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.118651  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.118948  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.119091  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.119135  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.119150  961265 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-642352"
	I0201 09:09:42.119165  961265 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-642352"
	I0201 09:09:42.119179  961265 addons.go:69] Setting metrics-server=true in profile "addons-642352"
	I0201 09:09:42.119206  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.119213  961265 addons.go:234] Setting addon metrics-server=true in "addons-642352"
	I0201 09:09:42.119237  961265 addons.go:69] Setting storage-provisioner=true in profile "addons-642352"
	I0201 09:09:42.119257  961265 addons.go:234] Setting addon storage-provisioner=true in "addons-642352"
	I0201 09:09:42.119257  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.119254  961265 addons.go:69] Setting registry=true in profile "addons-642352"
	I0201 09:09:42.119280  961265 addons.go:234] Setting addon registry=true in "addons-642352"
	I0201 09:09:42.119293  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.119325  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.119369  961265 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-642352"
	I0201 09:09:42.119389  961265 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-642352"
	I0201 09:09:42.119468  961265 addons.go:69] Setting volumesnapshots=true in profile "addons-642352"
	I0201 09:09:42.119479  961265 addons.go:234] Setting addon volumesnapshots=true in "addons-642352"
	I0201 09:09:42.119515  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.119629  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.119714  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.119733  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.119742  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.119947  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.120332  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.119135  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.122844  961265 addons.go:69] Setting gcp-auth=true in profile "addons-642352"
	I0201 09:09:42.122875  961265 mustload.go:65] Loading cluster: addons-642352
	I0201 09:09:42.123093  961265 config.go:182] Loaded profile config "addons-642352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:09:42.123339  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.124624  961265 addons.go:69] Setting ingress-dns=true in profile "addons-642352"
	I0201 09:09:42.124648  961265 addons.go:234] Setting addon ingress-dns=true in "addons-642352"
	I0201 09:09:42.124712  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.124791  961265 addons.go:69] Setting helm-tiller=true in profile "addons-642352"
	I0201 09:09:42.124814  961265 addons.go:234] Setting addon helm-tiller=true in "addons-642352"
	I0201 09:09:42.124860  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.125201  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.125346  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.125663  961265 addons.go:69] Setting ingress=true in profile "addons-642352"
	I0201 09:09:42.125684  961265 addons.go:234] Setting addon ingress=true in "addons-642352"
	I0201 09:09:42.125738  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.126037  961265 addons.go:69] Setting inspektor-gadget=true in profile "addons-642352"
	I0201 09:09:42.126092  961265 addons.go:234] Setting addon inspektor-gadget=true in "addons-642352"
	I0201 09:09:42.126154  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.139814  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.142954  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.166152  961265 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0201 09:09:42.168251  961265 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0201 09:09:42.168276  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0201 09:09:42.168339  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.169282  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.176989  961265 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0201 09:09:42.179565  961265 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0201 09:09:42.179595  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0201 09:09:42.179663  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.183069  961265 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0201 09:09:42.184790  961265 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0201 09:09:42.184825  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0201 09:09:42.184887  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.186065  961265 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0201 09:09:42.189392  961265 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0201 09:09:42.189417  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0201 09:09:42.189499  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.194272  961265 addons.go:234] Setting addon default-storageclass=true in "addons-642352"
	I0201 09:09:42.194339  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.194863  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.198374  961265 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0201 09:09:42.203615  961265 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0201 09:09:42.203645  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0201 09:09:42.203710  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.205699  961265 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0201 09:09:42.211702  961265 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0201 09:09:42.212669  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0201 09:09:42.212746  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.212933  961265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0201 09:09:42.211791  961265 out.go:177]   - Using image docker.io/registry:2.8.3
	I0201 09:09:42.212001  961265 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0201 09:09:42.212191  961265 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0201 09:09:42.216024  961265 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0201 09:09:42.216046  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0201 09:09:42.216114  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.217596  961265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0201 09:09:42.217619  961265 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-642352"
	I0201 09:09:42.219075  961265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0201 09:09:42.219143  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:42.220710  961265 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0201 09:09:42.220734  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0201 09:09:42.220797  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.222496  961265 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0201 09:09:42.220995  961265 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0201 09:09:42.221295  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:42.223938  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.224068  961265 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0201 09:09:42.225844  961265 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0201 09:09:42.227708  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0201 09:09:42.227746  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0201 09:09:42.229124  961265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0201 09:09:42.227789  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.227805  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.232520  961265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0201 09:09:42.234610  961265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0201 09:09:42.238450  961265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0201 09:09:42.241733  961265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0201 09:09:42.241712  961265 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0201 09:09:42.244156  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.249977  961265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0201 09:09:42.247905  961265 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0201 09:09:42.248382  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.252481  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.252559  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.254622  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0201 09:09:42.254710  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.259856  961265 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0201 09:09:42.257680  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.263516  961265 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0201 09:09:42.261272  961265 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0201 09:09:42.265187  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0201 09:09:42.266854  961265 out.go:177]   - Using image docker.io/busybox:stable
	I0201 09:09:42.265262  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.268702  961265 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0201 09:09:42.268721  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0201 09:09:42.268778  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.272212  961265 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0201 09:09:42.272232  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0201 09:09:42.272281  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:42.280015  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.286526  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.294819  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.298901  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.300699  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.300880  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.307556  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.311537  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:42.350575  961265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
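The long one-liner above patches the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1) from inside the cluster. Reconstructing only what the two sed expressions insert (a sketch derived from the command itself, not a dump of the live Corefile), the relevant part of the resulting Corefile would look like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
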
	I0201 09:09:42.635019  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0201 09:09:42.643883  961265 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0201 09:09:42.643924  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0201 09:09:42.651441  961265 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-642352" context rescaled to 1 replicas
	I0201 09:09:42.651496  961265 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0201 09:09:42.653572  961265 out.go:177] * Verifying Kubernetes components...
	I0201 09:09:42.653179  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0201 09:09:42.655201  961265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0201 09:09:42.743384  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0201 09:09:42.833726  961265 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0201 09:09:42.833823  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0201 09:09:42.834326  961265 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0201 09:09:42.834382  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0201 09:09:42.848326  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0201 09:09:42.943761  961265 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0201 09:09:42.943863  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0201 09:09:42.944267  961265 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0201 09:09:42.944331  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0201 09:09:42.945660  961265 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0201 09:09:42.945717  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0201 09:09:42.951360  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0201 09:09:42.957162  961265 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0201 09:09:42.957190  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0201 09:09:43.043661  961265 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0201 09:09:43.043690  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0201 09:09:43.045923  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0201 09:09:43.049644  961265 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0201 09:09:43.049734  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0201 09:09:43.132956  961265 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0201 09:09:43.133053  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0201 09:09:43.138535  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0201 09:09:43.147830  961265 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0201 09:09:43.147928  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0201 09:09:43.232406  961265 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0201 09:09:43.232492  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0201 09:09:43.235099  961265 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0201 09:09:43.235176  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0201 09:09:43.239429  961265 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0201 09:09:43.239462  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0201 09:09:43.332119  961265 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0201 09:09:43.332240  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0201 09:09:43.441924  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0201 09:09:43.455008  961265 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0201 09:09:43.455068  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0201 09:09:43.532587  961265 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0201 09:09:43.532682  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0201 09:09:43.546438  961265 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0201 09:09:43.546521  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0201 09:09:43.547895  961265 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0201 09:09:43.547964  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0201 09:09:43.634685  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0201 09:09:43.641986  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0201 09:09:43.832394  961265 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0201 09:09:43.832505  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0201 09:09:43.839319  961265 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0201 09:09:43.839417  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0201 09:09:43.850723  961265 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0201 09:09:43.850837  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0201 09:09:44.031042  961265 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0201 09:09:44.031136  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0201 09:09:44.234986  961265 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0201 09:09:44.235087  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0201 09:09:44.241023  961265 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0201 09:09:44.241165  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0201 09:09:44.332067  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0201 09:09:44.636750  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0201 09:09:44.742001  961265 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0201 09:09:44.742127  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0201 09:09:44.844568  961265 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0201 09:09:44.844655  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0201 09:09:45.040114  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0201 09:09:45.341209  961265 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0201 09:09:45.341302  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0201 09:09:45.443247  961265 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.092615936s)
	I0201 09:09:45.443287  961265 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0201 09:09:45.732824  961265 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0201 09:09:45.732936  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0201 09:09:45.849683  961265 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0201 09:09:45.849719  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0201 09:09:46.232782  961265 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0201 09:09:46.232842  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0201 09:09:46.837648  961265 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0201 09:09:46.837742  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0201 09:09:47.131082  961265 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0201 09:09:47.131173  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0201 09:09:47.451779  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0201 09:09:47.655627  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.020553961s)
	I0201 09:09:47.655716  961265 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.000491953s)
	I0201 09:09:47.656798  961265 node_ready.go:35] waiting up to 6m0s for node "addons-642352" to be "Ready" ...
	I0201 09:09:47.657056  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.001868939s)
	I0201 09:09:49.033999  961265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0201 09:09:49.034140  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:49.047509  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.304073281s)
	I0201 09:09:49.047553  961265 addons.go:470] Verifying addon ingress=true in "addons-642352"
	I0201 09:09:49.049224  961265 out.go:177] * Verifying ingress addon...
	I0201 09:09:49.047628  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.199263097s)
	I0201 09:09:49.047750  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.096343787s)
	I0201 09:09:49.047822  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.001853722s)
	I0201 09:09:49.047901  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.90933897s)
	I0201 09:09:49.047991  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.605963625s)
	I0201 09:09:49.049610  961265 addons.go:470] Verifying addon metrics-server=true in "addons-642352"
	I0201 09:09:49.048050  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.41324886s)
	I0201 09:09:49.048078  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.406051954s)
	I0201 09:09:49.049705  961265 addons.go:470] Verifying addon registry=true in "addons-642352"
	I0201 09:09:49.048105  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.71594539s)
	I0201 09:09:49.048235  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.411438402s)
	I0201 09:09:49.048281  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.008077533s)
	I0201 09:09:49.051450  961265 out.go:177] * Verifying registry addon...
	W0201 09:09:49.051674  961265 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0201 09:09:49.057921  961265 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0201 09:09:49.063015  961265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0201 09:09:49.063289  961265 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-642352 service yakd-dashboard -n yakd-dashboard
	
	I0201 09:09:49.065708  961265 retry.go:31] will retry after 286.419605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
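Both failures above are the same ordering problem: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the CRD that serves snapshot.storage.k8s.io/v1 is created in the same kubectl apply batch, so the API server may not have registered the new kind yet ("ensure CRDs are installed first"). The log shows the addon manager simply retrying, and a few lines further down re-running the batch with kubectl apply --force. A hypothetical manual workaround, if one were applying these manifests by hand rather than through the addon flow, would be to create the CRDs first and wait for them to become Established before applying the custom resources, for example:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
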
	I0201 09:09:49.067860  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	W0201 09:09:49.143069  961265 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
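The warning above is an ordinary optimistic-concurrency conflict: default-storageclass and storage-provisioner-rancher both annotate storage classes at roughly the same time, so one update lands on a stale resourceVersion and is rejected by the API server. Re-issuing the update against the current object resolves it; as a hypothetical manual equivalent (not something this test performs), marking the "standard" class as default again would be:

	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
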
	I0201 09:09:49.145347  961265 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0201 09:09:49.145381  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:49.145704  961265 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0201 09:09:49.145734  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:49.349774  961265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0201 09:09:49.353127  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0201 09:09:49.370209  961265 addons.go:234] Setting addon gcp-auth=true in "addons-642352"
	I0201 09:09:49.370279  961265 host.go:66] Checking if "addons-642352" exists ...
	I0201 09:09:49.370849  961265 cli_runner.go:164] Run: docker container inspect addons-642352 --format={{.State.Status}}
	I0201 09:09:49.394003  961265 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0201 09:09:49.394056  961265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-642352
	I0201 09:09:49.410797  961265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/addons-642352/id_rsa Username:docker}
	I0201 09:09:49.567027  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:49.570238  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:49.660040  961265 node_ready.go:58] node "addons-642352" has status "Ready":"False"
	I0201 09:09:50.068041  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:50.069856  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:50.567087  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:50.569595  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:51.139646  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:51.140558  961265 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0201 09:09:51.140587  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:51.160531  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.708630502s)
	I0201 09:09:51.160576  961265 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-642352"
	I0201 09:09:51.160595  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.807387605s)
	I0201 09:09:51.162513  961265 out.go:177] * Verifying csi-hostpath-driver addon...
	I0201 09:09:51.162557  961265 node_ready.go:49] node "addons-642352" has status "Ready":"True"
	I0201 09:09:51.163855  961265 node_ready.go:38] duration metric: took 3.507021919s waiting for node "addons-642352" to be "Ready" ...
	I0201 09:09:51.163876  961265 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0201 09:09:51.160648  961265 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.766619506s)
	I0201 09:09:51.165703  961265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0201 09:09:51.164697  961265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0201 09:09:51.169552  961265 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0201 09:09:51.171533  961265 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0201 09:09:51.171555  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0201 09:09:51.174136  961265 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-97z46" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:51.235285  961265 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0201 09:09:51.235313  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:51.251913  961265 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0201 09:09:51.251942  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0201 09:09:51.273974  961265 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0201 09:09:51.273998  961265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0201 09:09:51.349895  961265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0201 09:09:51.568335  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:51.570491  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:51.673110  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:52.136974  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:52.138439  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:52.235628  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:52.642220  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:52.650815  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:52.737286  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:53.132815  961265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.782815492s)
	I0201 09:09:53.134848  961265 addons.go:470] Verifying addon gcp-auth=true in "addons-642352"
	I0201 09:09:53.136655  961265 out.go:177] * Verifying gcp-auth addon...
	I0201 09:09:53.139478  961265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0201 09:09:53.145247  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:53.146811  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:53.154163  961265 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0201 09:09:53.154244  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:53.235830  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:53.241650  961265 pod_ready.go:102] pod "coredns-5dd5756b68-97z46" in "kube-system" namespace has status "Ready":"False"
	I0201 09:09:53.570304  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:53.634083  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:53.644543  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:53.737445  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:54.137922  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:54.138095  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:54.144132  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:54.236445  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:54.635992  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:54.637996  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:54.644417  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:54.737533  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:54.739369  961265 pod_ready.go:92] pod "coredns-5dd5756b68-97z46" in "kube-system" namespace has status "Ready":"True"
	I0201 09:09:54.739410  961265 pod_ready.go:81] duration metric: took 3.565246511s waiting for pod "coredns-5dd5756b68-97z46" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:54.739440  961265 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-642352" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:54.747939  961265 pod_ready.go:92] pod "etcd-addons-642352" in "kube-system" namespace has status "Ready":"True"
	I0201 09:09:54.747968  961265 pod_ready.go:81] duration metric: took 8.516298ms waiting for pod "etcd-addons-642352" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:54.747984  961265 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-642352" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:54.756753  961265 pod_ready.go:92] pod "kube-apiserver-addons-642352" in "kube-system" namespace has status "Ready":"True"
	I0201 09:09:54.756792  961265 pod_ready.go:81] duration metric: took 8.799033ms waiting for pod "kube-apiserver-addons-642352" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:54.756809  961265 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-642352" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:54.832837  961265 pod_ready.go:92] pod "kube-controller-manager-addons-642352" in "kube-system" namespace has status "Ready":"True"
	I0201 09:09:54.832865  961265 pod_ready.go:81] duration metric: took 76.047443ms waiting for pod "kube-controller-manager-addons-642352" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:54.832882  961265 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gzzdh" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:54.838833  961265 pod_ready.go:92] pod "kube-proxy-gzzdh" in "kube-system" namespace has status "Ready":"True"
	I0201 09:09:54.838860  961265 pod_ready.go:81] duration metric: took 5.96935ms waiting for pod "kube-proxy-gzzdh" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:54.838873  961265 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-642352" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:55.133285  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:55.134228  961265 pod_ready.go:92] pod "kube-scheduler-addons-642352" in "kube-system" namespace has status "Ready":"True"
	I0201 09:09:55.134254  961265 pod_ready.go:81] duration metric: took 295.372482ms waiting for pod "kube-scheduler-addons-642352" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:55.134267  961265 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace to be "Ready" ...
	I0201 09:09:55.134676  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:55.143688  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:55.174089  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:55.568397  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:55.570631  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:55.643862  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:55.673305  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:56.068236  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:56.070107  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:56.143420  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:56.173812  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:56.568653  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:56.571205  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:56.643202  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:56.672804  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:57.067468  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:57.069821  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:57.141121  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:09:57.143146  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:57.173336  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:57.572873  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:57.573578  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:57.643286  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:57.672895  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:58.068131  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:58.070015  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:58.143045  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:58.173151  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:58.638226  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:58.640332  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:58.648296  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:58.742675  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:59.068687  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:59.133605  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:59.143218  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:59.235036  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:09:59.569380  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:09:59.570838  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:09:59.641031  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:09:59.643162  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:09:59.673360  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:00.068075  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:00.070504  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:00.142808  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:00.174432  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:00.568157  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:00.570918  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:00.643648  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:00.673539  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:01.068276  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:01.070506  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:01.146168  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:01.173527  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:01.568632  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:01.570309  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:01.641585  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:01.643383  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:01.673304  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:02.067873  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:02.070523  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:02.143081  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:02.172682  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:02.568220  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:02.570229  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:02.642546  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:02.674547  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:03.068480  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:03.069741  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:03.142389  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:03.172755  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:03.567765  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:03.569869  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:03.642982  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:03.673805  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:04.069360  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:04.070270  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:04.140234  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:04.142902  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:04.235144  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:04.568454  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:04.570377  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:04.643614  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:04.675683  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:05.067673  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:05.070115  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:05.143315  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:05.173386  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:05.567739  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:05.570444  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:05.642468  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:05.673482  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:06.068077  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:06.069731  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:06.140343  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:06.142486  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:06.172655  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:06.567298  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:06.569489  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:06.642235  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:06.673191  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:07.067816  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:07.069663  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:07.142169  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:07.172740  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:07.567520  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:07.571601  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:07.642269  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:07.675396  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:08.067830  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:08.070607  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:08.141246  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:08.142965  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:08.172603  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:08.567669  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:08.569829  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:08.642711  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:08.673342  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:09.069334  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:09.070567  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:09.142864  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:09.173718  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:09.568949  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:09.570246  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:09.643000  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:09.672867  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:10.067754  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:10.069957  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:10.142308  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:10.172553  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:10.567075  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:10.569736  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:10.641187  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:10.642999  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:10.672539  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:11.068519  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:11.070285  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:11.142608  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:11.173151  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:11.567852  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:11.570384  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:11.642756  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:11.674880  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:12.069228  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:12.069917  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:12.143328  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:12.173524  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:12.568644  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:12.570653  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:12.643196  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:12.673888  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:13.068115  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:13.071743  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:13.141545  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:13.143332  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:13.173547  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:13.568875  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:13.571131  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:13.643051  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:13.673750  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:14.068757  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:14.070032  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:14.143024  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:14.173223  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:14.568138  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:14.570834  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:14.643071  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:14.673047  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:15.067859  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:15.069905  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:15.142522  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:15.173180  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:15.568766  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:15.570080  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:15.640172  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:15.642682  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:15.673757  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:16.068378  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:16.069459  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:16.142627  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:16.174018  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:16.568023  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:16.570371  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:16.642526  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:16.672942  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:17.068009  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:17.070183  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:17.142162  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:17.172938  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:17.568041  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:17.570752  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:17.640484  961265 pod_ready.go:102] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:17.642601  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:17.675078  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:18.067474  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:18.069972  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:18.139939  961265 pod_ready.go:92] pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace has status "Ready":"True"
	I0201 09:10:18.139964  961265 pod_ready.go:81] duration metric: took 23.005689248s waiting for pod "metrics-server-69cf46c98-4mrxs" in "kube-system" namespace to be "Ready" ...
	I0201 09:10:18.139974  961265 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-p8cwv" in "kube-system" namespace to be "Ready" ...
	I0201 09:10:18.142238  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:18.173044  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:18.567635  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:18.570076  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:18.643861  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:18.672708  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:19.067517  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:19.069861  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:19.144131  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:19.174069  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:19.569321  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:19.569723  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:19.643963  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:19.673076  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:20.068975  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:20.069592  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:20.143489  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:20.145699  961265 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8cwv" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:20.173269  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:20.568671  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0201 09:10:20.570056  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:20.643843  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:20.673524  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:21.068405  961265 kapi.go:107] duration metric: took 32.005393744s to wait for kubernetes.io/minikube-addons=registry ...
	I0201 09:10:21.070771  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:21.143427  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:21.173493  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:21.569373  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:21.643744  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:21.673343  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:22.070296  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:22.143053  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:22.172873  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:22.570317  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:22.643360  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:22.646080  961265 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8cwv" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:22.673488  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:23.069847  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:23.143397  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:23.173586  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:23.569841  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:23.643635  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:23.673606  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:24.069748  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:24.143344  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:24.173335  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:24.569682  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:24.643052  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:24.673009  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:25.070329  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:25.143355  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:25.145689  961265 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8cwv" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:25.173824  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:25.570171  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:25.644116  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:25.676957  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:26.070855  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:26.143011  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:26.173156  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:26.570320  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:26.643108  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:26.673043  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:27.071039  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:27.144234  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:27.146919  961265 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8cwv" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:27.174140  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:27.636266  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:27.644439  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:27.741343  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:28.070923  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:28.144012  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:28.234195  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:28.571010  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:28.645270  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:28.674666  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:29.071312  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:29.144799  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:29.147336  961265 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8cwv" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:29.174280  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:29.571038  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:29.643453  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:29.674743  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:30.069940  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:30.144217  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:30.175056  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:30.570632  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:30.643251  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:30.674014  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:31.070151  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:31.143963  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:31.173517  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:31.570206  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:31.643795  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:31.646540  961265 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8cwv" in "kube-system" namespace has status "Ready":"False"
	I0201 09:10:31.673358  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:32.069655  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:32.143322  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:32.173946  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:32.570569  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:32.643808  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:32.673282  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:33.070593  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:33.143175  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:33.145557  961265 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-p8cwv" in "kube-system" namespace has status "Ready":"True"
	I0201 09:10:33.145581  961265 pod_ready.go:81] duration metric: took 15.005601068s waiting for pod "nvidia-device-plugin-daemonset-p8cwv" in "kube-system" namespace to be "Ready" ...
	I0201 09:10:33.145608  961265 pod_ready.go:38] duration metric: took 41.981709701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0201 09:10:33.145628  961265 api_server.go:52] waiting for apiserver process to appear ...
	I0201 09:10:33.145688  961265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0201 09:10:33.160175  961265 api_server.go:72] duration metric: took 50.50863737s to wait for apiserver process to appear ...
	I0201 09:10:33.160201  961265 api_server.go:88] waiting for apiserver healthz status ...
	I0201 09:10:33.160222  961265 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0201 09:10:33.164769  961265 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0201 09:10:33.165838  961265 api_server.go:141] control plane version: v1.28.4
	I0201 09:10:33.165865  961265 api_server.go:131] duration metric: took 5.656371ms to wait for apiserver health ...
	I0201 09:10:33.165873  961265 system_pods.go:43] waiting for kube-system pods to appear ...
	I0201 09:10:33.172396  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:33.174141  961265 system_pods.go:59] 19 kube-system pods found
	I0201 09:10:33.174166  961265 system_pods.go:61] "coredns-5dd5756b68-97z46" [41e764bb-a62c-4fd7-8f18-f194edf1d2d2] Running
	I0201 09:10:33.174173  961265 system_pods.go:61] "csi-hostpath-attacher-0" [96c268bf-fa4c-448b-906b-909f387b0532] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0201 09:10:33.174181  961265 system_pods.go:61] "csi-hostpath-resizer-0" [3f32b7c0-196f-405c-b9f2-8ce0ae761c2a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0201 09:10:33.174193  961265 system_pods.go:61] "csi-hostpathplugin-7h7xf" [24a559af-98e5-478e-9793-ad862550295b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0201 09:10:33.174198  961265 system_pods.go:61] "etcd-addons-642352" [eebf2d48-e591-41e2-90e9-96c464002302] Running
	I0201 09:10:33.174202  961265 system_pods.go:61] "kindnet-tmjnr" [08025e21-40c7-4ac1-af80-f9c1a9e8b0f9] Running
	I0201 09:10:33.174209  961265 system_pods.go:61] "kube-apiserver-addons-642352" [9a0cb166-c0af-4db0-b54d-fd6a8b0f676b] Running
	I0201 09:10:33.174213  961265 system_pods.go:61] "kube-controller-manager-addons-642352" [3d53e85e-ecd6-43a6-ae7b-deb6b6c88479] Running
	I0201 09:10:33.174221  961265 system_pods.go:61] "kube-ingress-dns-minikube" [46dc9e1a-2137-442c-993a-921d6322672a] Running
	I0201 09:10:33.174225  961265 system_pods.go:61] "kube-proxy-gzzdh" [53b46773-35c5-412f-984e-a49b361b13e1] Running
	I0201 09:10:33.174231  961265 system_pods.go:61] "kube-scheduler-addons-642352" [2e89d318-279a-42f3-92c1-2c694dd8ca02] Running
	I0201 09:10:33.174235  961265 system_pods.go:61] "metrics-server-69cf46c98-4mrxs" [d1e198e3-e716-4091-b76d-458a065b8206] Running
	I0201 09:10:33.174241  961265 system_pods.go:61] "nvidia-device-plugin-daemonset-p8cwv" [e416cbdf-6552-406b-8891-00782080893a] Running
	I0201 09:10:33.174245  961265 system_pods.go:61] "registry-proxy-tsccz" [5512787a-dabe-4019-aead-c68f8a431ce8] Running
	I0201 09:10:33.174251  961265 system_pods.go:61] "registry-s2xzz" [4dea1280-3766-46b5-b712-24e29ff33b38] Running
	I0201 09:10:33.174256  961265 system_pods.go:61] "snapshot-controller-58dbcc7b99-2qmjj" [0928963d-226a-4e66-b7ae-752f13b44c3b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0201 09:10:33.174263  961265 system_pods.go:61] "snapshot-controller-58dbcc7b99-t8bsp" [888cb879-39ef-4b8e-ae14-dca7f4cab0bb] Running
	I0201 09:10:33.174267  961265 system_pods.go:61] "storage-provisioner" [0153e690-ed67-497c-ad47-0666208720b1] Running
	I0201 09:10:33.174272  961265 system_pods.go:61] "tiller-deploy-7b677967b9-q2zb9" [3d55d479-092d-4cdb-9344-110276a11056] Running
	I0201 09:10:33.174278  961265 system_pods.go:74] duration metric: took 8.398838ms to wait for pod list to return data ...
	I0201 09:10:33.174287  961265 default_sa.go:34] waiting for default service account to be created ...
	I0201 09:10:33.176213  961265 default_sa.go:45] found service account: "default"
	I0201 09:10:33.176231  961265 default_sa.go:55] duration metric: took 1.936428ms for default service account to be created ...
	I0201 09:10:33.176238  961265 system_pods.go:116] waiting for k8s-apps to be running ...
	I0201 09:10:33.183773  961265 system_pods.go:86] 19 kube-system pods found
	I0201 09:10:33.183807  961265 system_pods.go:89] "coredns-5dd5756b68-97z46" [41e764bb-a62c-4fd7-8f18-f194edf1d2d2] Running
	I0201 09:10:33.183825  961265 system_pods.go:89] "csi-hostpath-attacher-0" [96c268bf-fa4c-448b-906b-909f387b0532] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0201 09:10:33.183834  961265 system_pods.go:89] "csi-hostpath-resizer-0" [3f32b7c0-196f-405c-b9f2-8ce0ae761c2a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0201 09:10:33.183846  961265 system_pods.go:89] "csi-hostpathplugin-7h7xf" [24a559af-98e5-478e-9793-ad862550295b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0201 09:10:33.183858  961265 system_pods.go:89] "etcd-addons-642352" [eebf2d48-e591-41e2-90e9-96c464002302] Running
	I0201 09:10:33.183867  961265 system_pods.go:89] "kindnet-tmjnr" [08025e21-40c7-4ac1-af80-f9c1a9e8b0f9] Running
	I0201 09:10:33.183878  961265 system_pods.go:89] "kube-apiserver-addons-642352" [9a0cb166-c0af-4db0-b54d-fd6a8b0f676b] Running
	I0201 09:10:33.183887  961265 system_pods.go:89] "kube-controller-manager-addons-642352" [3d53e85e-ecd6-43a6-ae7b-deb6b6c88479] Running
	I0201 09:10:33.183898  961265 system_pods.go:89] "kube-ingress-dns-minikube" [46dc9e1a-2137-442c-993a-921d6322672a] Running
	I0201 09:10:33.183908  961265 system_pods.go:89] "kube-proxy-gzzdh" [53b46773-35c5-412f-984e-a49b361b13e1] Running
	I0201 09:10:33.183916  961265 system_pods.go:89] "kube-scheduler-addons-642352" [2e89d318-279a-42f3-92c1-2c694dd8ca02] Running
	I0201 09:10:33.183926  961265 system_pods.go:89] "metrics-server-69cf46c98-4mrxs" [d1e198e3-e716-4091-b76d-458a065b8206] Running
	I0201 09:10:33.183937  961265 system_pods.go:89] "nvidia-device-plugin-daemonset-p8cwv" [e416cbdf-6552-406b-8891-00782080893a] Running
	I0201 09:10:33.183946  961265 system_pods.go:89] "registry-proxy-tsccz" [5512787a-dabe-4019-aead-c68f8a431ce8] Running
	I0201 09:10:33.183954  961265 system_pods.go:89] "registry-s2xzz" [4dea1280-3766-46b5-b712-24e29ff33b38] Running
	I0201 09:10:33.183967  961265 system_pods.go:89] "snapshot-controller-58dbcc7b99-2qmjj" [0928963d-226a-4e66-b7ae-752f13b44c3b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0201 09:10:33.183977  961265 system_pods.go:89] "snapshot-controller-58dbcc7b99-t8bsp" [888cb879-39ef-4b8e-ae14-dca7f4cab0bb] Running
	I0201 09:10:33.183988  961265 system_pods.go:89] "storage-provisioner" [0153e690-ed67-497c-ad47-0666208720b1] Running
	I0201 09:10:33.183995  961265 system_pods.go:89] "tiller-deploy-7b677967b9-q2zb9" [3d55d479-092d-4cdb-9344-110276a11056] Running
	I0201 09:10:33.184011  961265 system_pods.go:126] duration metric: took 7.762385ms to wait for k8s-apps to be running ...
	I0201 09:10:33.184024  961265 system_svc.go:44] waiting for kubelet service to be running ....
	I0201 09:10:33.184084  961265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0201 09:10:33.195481  961265 system_svc.go:56] duration metric: took 11.449535ms WaitForService to wait for kubelet.
	I0201 09:10:33.195511  961265 kubeadm.go:581] duration metric: took 50.543977334s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0201 09:10:33.195543  961265 node_conditions.go:102] verifying NodePressure condition ...
	I0201 09:10:33.198576  961265 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0201 09:10:33.198609  961265 node_conditions.go:123] node cpu capacity is 8
	I0201 09:10:33.198623  961265 node_conditions.go:105] duration metric: took 3.074639ms to run NodePressure ...
	I0201 09:10:33.198639  961265 start.go:228] waiting for startup goroutines ...
	I0201 09:10:33.570667  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:33.643263  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:33.672946  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:34.070150  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:34.143765  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:34.173229  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:34.570089  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:34.643485  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:34.673247  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:35.070841  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:35.143302  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:35.172938  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:35.570758  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:35.643935  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:35.674113  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:36.071215  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:36.144595  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:36.174016  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:36.570259  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:36.643524  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:36.673564  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:37.069759  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:37.143648  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:37.173848  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:37.570114  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:37.643869  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:37.675432  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:38.070268  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:38.143826  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:38.173645  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:38.570288  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:38.643767  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:38.673255  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:39.070044  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:39.143972  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:39.173980  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:39.570587  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:39.643811  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:39.673555  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:40.069982  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:40.143791  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:40.173925  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:40.569918  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:40.643635  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:40.673005  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:41.070437  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:41.143299  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:41.174359  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:41.570847  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:41.643622  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:41.674802  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:42.070608  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:42.143782  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:42.173459  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:42.571297  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:42.644209  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:42.673539  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:43.070739  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:43.143287  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:43.174920  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:43.570204  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:43.643605  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:43.673256  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:44.072704  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:44.142795  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:44.173411  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:44.570894  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:44.643433  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:44.673731  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:45.069555  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:45.143616  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:45.175263  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:45.570783  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:45.643782  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:45.673322  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:46.071090  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:46.143483  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:46.173236  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:46.570480  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:46.642905  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:46.672443  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:47.070134  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:47.144332  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:47.173128  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:47.570287  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:47.643758  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:47.739119  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:48.142669  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:48.145856  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:48.238593  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:48.642019  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:48.645562  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:48.734962  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:49.143850  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:49.154272  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:49.239630  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:49.634366  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:49.644994  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:49.737020  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:50.071273  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:50.144261  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:50.173458  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:50.570104  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:50.644574  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:50.674123  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:51.070116  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:51.143792  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:51.174046  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:51.570319  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:51.643362  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:51.673279  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:52.070945  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:52.144625  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:52.173782  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:52.570237  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:52.644681  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:52.674073  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:53.070500  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:53.143218  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:53.173396  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:53.570548  961265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0201 09:10:53.643050  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:53.672606  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:54.070845  961265 kapi.go:107] duration metric: took 1m5.012922053s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0201 09:10:54.143308  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:54.173477  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:54.643228  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:54.672931  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:55.144108  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:55.173366  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:55.643810  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:55.676846  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:56.144185  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:56.173206  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:56.644110  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:56.673190  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:57.143750  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:57.173864  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:57.644226  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:57.673932  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:58.143621  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:58.173579  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:58.643715  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:58.673511  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:59.143404  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:59.173107  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:10:59.643079  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:10:59.673064  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:11:00.143318  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:11:00.172971  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:11:00.644066  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:11:00.672604  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:11:01.144012  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:11:01.173271  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:11:01.643349  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0201 09:11:01.673604  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:11:02.143783  961265 kapi.go:107] duration metric: took 1m9.004337806s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0201 09:11:02.145556  961265 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-642352 cluster.
	I0201 09:11:02.147293  961265 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0201 09:11:02.148601  961265 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0201 09:11:02.174098  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:11:02.672628  961265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0201 09:11:03.174514  961265 kapi.go:107] duration metric: took 1m12.009813779s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0201 09:11:03.188541  961265 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, cloud-spanner, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0201 09:11:03.193154  961265 addons.go:505] enable addons completed in 1m21.074780834s: enabled=[storage-provisioner nvidia-device-plugin ingress-dns cloud-spanner metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0201 09:11:03.193234  961265 start.go:233] waiting for cluster config update ...
	I0201 09:11:03.193263  961265 start.go:242] writing updated cluster config ...
	I0201 09:11:03.193553  961265 ssh_runner.go:195] Run: rm -f paused
	I0201 09:11:03.252732  961265 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0201 09:11:03.308395  961265 out.go:177] * Done! kubectl is now configured to use "addons-642352" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 01 09:13:36 addons-642352 crio[948]: time="2024-02-01 09:13:36.386191787Z" level=info msg="Removing container: a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f" id=1f7dcfa1-2bd2-4e4e-b746-a1c0c7ffbe7d name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 01 09:13:36 addons-642352 crio[948]: time="2024-02-01 09:13:36.400113163Z" level=info msg="Removed container a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=1f7dcfa1-2bd2-4e4e-b746-a1c0c7ffbe7d name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 01 09:13:37 addons-642352 crio[948]: time="2024-02-01 09:13:37.867129577Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=c13ecd9d-7d85-4d68-9179-e1a5c821efa6 name=/runtime.v1.ImageService/PullImage
	Feb 01 09:13:37 addons-642352 crio[948]: time="2024-02-01 09:13:37.868174777Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=f97447e9-0c88-473f-8772-51c6bf194f2a name=/runtime.v1.ImageService/ImageStatus
	Feb 01 09:13:37 addons-642352 crio[948]: time="2024-02-01 09:13:37.869152573Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=f97447e9-0c88-473f-8772-51c6bf194f2a name=/runtime.v1.ImageService/ImageStatus
	Feb 01 09:13:37 addons-642352 crio[948]: time="2024-02-01 09:13:37.870063099Z" level=info msg="Creating container: default/hello-world-app-5d77478584-gtfbn/hello-world-app" id=501cdf47-a252-483a-a20e-a10a4c35a2b1 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 01 09:13:37 addons-642352 crio[948]: time="2024-02-01 09:13:37.870180633Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 01 09:13:37 addons-642352 crio[948]: time="2024-02-01 09:13:37.923192699Z" level=info msg="Created container 347b77d2a14a6979a39f9564ffad433f76db36ec57884c497a820308a3213516: default/hello-world-app-5d77478584-gtfbn/hello-world-app" id=501cdf47-a252-483a-a20e-a10a4c35a2b1 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 01 09:13:37 addons-642352 crio[948]: time="2024-02-01 09:13:37.923873081Z" level=info msg="Starting container: 347b77d2a14a6979a39f9564ffad433f76db36ec57884c497a820308a3213516" id=cbb20a3d-f248-499d-8261-e0649f00af21 name=/runtime.v1.RuntimeService/StartContainer
	Feb 01 09:13:37 addons-642352 crio[948]: time="2024-02-01 09:13:37.930866989Z" level=info msg="Started container" PID=9937 containerID=347b77d2a14a6979a39f9564ffad433f76db36ec57884c497a820308a3213516 description=default/hello-world-app-5d77478584-gtfbn/hello-world-app id=cbb20a3d-f248-499d-8261-e0649f00af21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d9aebf8b2e08c5572124f0f53f61f2a4e6927e5b9bbd2a1c1a33c9004617ab9
	Feb 01 09:13:38 addons-642352 crio[948]: time="2024-02-01 09:13:38.291128854Z" level=info msg="Stopping container: a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0 (timeout: 2s)" id=95defedc-3c6e-47c9-a48e-33c895b5fcf5 name=/runtime.v1.RuntimeService/StopContainer
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.298253084Z" level=warning msg="Stopping container a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=95defedc-3c6e-47c9-a48e-33c895b5fcf5 name=/runtime.v1.RuntimeService/StopContainer
	Feb 01 09:13:40 addons-642352 conmon[5623]: conmon a54df7cee08c7b932964 <ninfo>: container 5635 exited with status 137
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.430758282Z" level=info msg="Stopped container a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0: ingress-nginx/ingress-nginx-controller-69cff4fd79-wjblf/controller" id=95defedc-3c6e-47c9-a48e-33c895b5fcf5 name=/runtime.v1.RuntimeService/StopContainer
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.431386788Z" level=info msg="Stopping pod sandbox: b1649949adeaff9255205a7be6424dba8cd4e188a073ca72a72faae82b83b00f" id=b7a240d9-741d-400a-a790-7ef7f9298258 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.434524516Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-C3HLGNYLL3XREXHI - [0:0]\n:KUBE-HP-MT2XKA3F5EUNWKL2 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-C3HLGNYLL3XREXHI\n-X KUBE-HP-MT2XKA3F5EUNWKL2\nCOMMIT\n"
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.435953658Z" level=info msg="Closing host port tcp:80"
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.436000796Z" level=info msg="Closing host port tcp:443"
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.437491331Z" level=info msg="Host port tcp:80 does not have an open socket"
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.437513882Z" level=info msg="Host port tcp:443 does not have an open socket"
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.437658270Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-wjblf Namespace:ingress-nginx ID:b1649949adeaff9255205a7be6424dba8cd4e188a073ca72a72faae82b83b00f UID:cc254d8c-527a-49b3-8571-5f674630e01b NetNS:/var/run/netns/e092536f-f4ec-44b0-8531-7906092d23b5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.437773908Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-wjblf from CNI network \"kindnet\" (type=ptp)"
	Feb 01 09:13:40 addons-642352 crio[948]: time="2024-02-01 09:13:40.464137761Z" level=info msg="Stopped pod sandbox: b1649949adeaff9255205a7be6424dba8cd4e188a073ca72a72faae82b83b00f" id=b7a240d9-741d-400a-a790-7ef7f9298258 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 01 09:13:41 addons-642352 crio[948]: time="2024-02-01 09:13:41.402445581Z" level=info msg="Removing container: a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0" id=d0eca928-e229-4e43-ba13-7457b97a2fbb name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 01 09:13:41 addons-642352 crio[948]: time="2024-02-01 09:13:41.416178699Z" level=info msg="Removed container a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0: ingress-nginx/ingress-nginx-controller-69cff4fd79-wjblf/controller" id=d0eca928-e229-4e43-ba13-7457b97a2fbb name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	347b77d2a14a6       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   4d9aebf8b2e08       hello-world-app-5d77478584-gtfbn
	b5de33697084d       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   a7284d8df0deb       headlamp-7ddfbb94ff-r8fgz
	46c0a8ea0fcda       docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da                              2 minutes ago       Running             nginx                     0                   a672d31333168       nginx
	86d3f36b801a3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   568ec81f16e6e       gcp-auth-d4c87556c-spt5m
	d626686bd4bea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   b0a2702e24675       ingress-nginx-admission-patch-qjn58
	3aca81a96d9b6       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   1dd7012db7778       yakd-dashboard-9947fc6bf-sh5hc
	656023fe3762d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   c5becef116afe       ingress-nginx-admission-create-tkjz7
	cb13d265fb782       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   296fc67044096       coredns-5dd5756b68-97z46
	dba6c9646ec96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   6e0ad141a0a31       storage-provisioner
	6a9b051d431b7       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   ad647137b93da       kube-proxy-gzzdh
	eb07b25a2d8b0       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   da184db6edba4       kindnet-tmjnr
	7949a0c4d0897       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   f34a6b9ff244c       kube-controller-manager-addons-642352
	4b243027fc21a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   34e3ea3b971e1       kube-apiserver-addons-642352
	beec74884138d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   0aa72274b02fa       etcd-addons-642352
	53d39c795f697       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   57761897b1c5d       kube-scheduler-addons-642352
	
	
	==> coredns [cb13d265fb7823c84d4e8283453786725591f489ac76c41ce95425908fe2b609] <==
	[INFO] 10.244.0.5:41901 - 3106 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088023s
	[INFO] 10.244.0.5:54830 - 51546 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.017639334s
	[INFO] 10.244.0.5:54830 - 16984 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.0178536s
	[INFO] 10.244.0.5:33409 - 40436 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00405879s
	[INFO] 10.244.0.5:33409 - 6640 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006100739s
	[INFO] 10.244.0.5:45619 - 46557 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004198021s
	[INFO] 10.244.0.5:45619 - 56785 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004575804s
	[INFO] 10.244.0.5:34675 - 55844 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090428s
	[INFO] 10.244.0.5:34675 - 15656 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000139578s
	[INFO] 10.244.0.21:51216 - 49778 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000240687s
	[INFO] 10.244.0.21:49044 - 39579 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000346888s
	[INFO] 10.244.0.21:34464 - 15548 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130687s
	[INFO] 10.244.0.21:45942 - 8445 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000162545s
	[INFO] 10.244.0.21:43096 - 53856 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123833s
	[INFO] 10.244.0.21:44422 - 22683 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001249s
	[INFO] 10.244.0.21:43059 - 34072 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007032667s
	[INFO] 10.244.0.21:34827 - 45320 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007427969s
	[INFO] 10.244.0.21:60004 - 21183 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006951376s
	[INFO] 10.244.0.21:37730 - 60854 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007389154s
	[INFO] 10.244.0.21:60905 - 31477 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005598274s
	[INFO] 10.244.0.21:48479 - 13762 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005956682s
	[INFO] 10.244.0.21:49662 - 9655 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.000765978s
	[INFO] 10.244.0.21:60627 - 43484 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.000764124s
	[INFO] 10.244.0.23:34852 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000178908s
	[INFO] 10.244.0.23:47272 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147122s
	
	
	==> describe nodes <==
	Name:               addons-642352
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-642352
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=addons-642352
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_01T09_09_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-642352
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 01 Feb 2024 09:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-642352
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 01 Feb 2024 09:13:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 01 Feb 2024 09:12:32 +0000   Thu, 01 Feb 2024 09:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 01 Feb 2024 09:12:32 +0000   Thu, 01 Feb 2024 09:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 01 Feb 2024 09:12:32 +0000   Thu, 01 Feb 2024 09:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 01 Feb 2024 09:12:32 +0000   Thu, 01 Feb 2024 09:09:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-642352
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1f9f735de744badb7d7784b0c83c999
	  System UUID:                8c0c6ac3-e918-4ce9-aba0-88ccb0d38e3a
	  Boot ID:                    2cfa37ec-936f-4f6f-8415-4c1cf32697e8
	  Kernel Version:             5.15.0-1049-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-gtfbn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-d4c87556c-spt5m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  headlamp                    headlamp-7ddfbb94ff-r8fgz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 coredns-5dd5756b68-97z46                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m3s
	  kube-system                 etcd-addons-642352                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m16s
	  kube-system                 kindnet-tmjnr                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m4s
	  kube-system                 kube-apiserver-addons-642352             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-addons-642352    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-gzzdh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-addons-642352             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-sh5hc           0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m58s  kube-proxy       
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m16s  kubelet          Node addons-642352 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s  kubelet          Node addons-642352 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s  kubelet          Node addons-642352 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m4s   node-controller  Node addons-642352 event: Registered Node addons-642352 in Controller
	  Normal  NodeReady                3m55s  kubelet          Node addons-642352 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: 02 42 61 50 e0 51 02 42 c0 a8 5e 02 08 00
	[  +6.651458] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c4cb2b33b568
	[  +0.000006] ll header: 00000000: 02 42 61 50 e0 51 02 42 c0 a8 5e 02 08 00
	[  +4.867689] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-f771dc58cf4e
	[  +0.000006] ll header: 00000000: 02 42 7d 83 19 b2 02 42 c0 a8 4c 02 08 00
	[  +8.443392] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c4cb2b33b568
	[  +0.000007] ll header: 00000000: 02 42 61 50 e0 51 02 42 c0 a8 5e 02 08 00
	[  +3.839713] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-31cd09de568c
	[  +0.000006] ll header: 00000000: 02 42 c7 d7 50 77 02 42 c0 a8 55 02 08 00
	[  +0.000026] IPv4: martian source 10.244.0.5 from 10.96.0.1, on dev br-31cd09de568c
	[  +0.000005] ll header: 00000000: 02 42 c7 d7 50 77 02 42 c0 a8 55 02 08 00
	[Feb 1 09:11] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a 0e 69 7c fe 07 12 e6 eb b8 95 32 08 00
	[  +1.015423] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a 0e 69 7c fe 07 12 e6 eb b8 95 32 08 00
	[  +2.011906] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 8a 0e 69 7c fe 07 12 e6 eb b8 95 32 08 00
	[  +4.227598] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a 0e 69 7c fe 07 12 e6 eb b8 95 32 08 00
	[  +8.187454] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a 0e 69 7c fe 07 12 e6 eb b8 95 32 08 00
	[ +16.126900] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a 0e 69 7c fe 07 12 e6 eb b8 95 32 08 00
	[Feb 1 09:12] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 8a 0e 69 7c fe 07 12 e6 eb b8 95 32 08 00
	
	
	==> etcd [beec74884138db779f07ac3c265d22c7f518d12c984ba722d66f0a9269a83455] <==
	{"level":"warn","ts":"2024-02-01T09:09:45.238551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.321487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-tmjnr\" ","response":"range_response_count:1 size:4698"}
	{"level":"info","ts":"2024-02-01T09:09:45.243734Z","caller":"traceutil/trace.go:171","msg":"trace[525081933] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-tmjnr; range_end:; response_count:1; response_revision:385; }","duration":"195.512587ms","start":"2024-02-01T09:09:45.048203Z","end":"2024-02-01T09:09:45.243716Z","steps":["trace[525081933] 'agreement among raft nodes before linearized reading'  (duration: 186.429427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-01T09:09:45.244208Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.480173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-01T09:09:45.244303Z","caller":"traceutil/trace.go:171","msg":"trace[1124320874] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:385; }","duration":"213.184579ms","start":"2024-02-01T09:09:45.031106Z","end":"2024-02-01T09:09:45.244291Z","steps":["trace[1124320874] 'agreement among raft nodes before linearized reading'  (duration: 203.448507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-01T09:09:45.834598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.136233ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128026887832517508 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-tmjnr\" mod_revision:331 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-tmjnr\" value_size:4622 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-tmjnr\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-02-01T09:09:45.835524Z","caller":"traceutil/trace.go:171","msg":"trace[1648602944] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"190.210701ms","start":"2024-02-01T09:09:45.645298Z","end":"2024-02-01T09:09:45.835509Z","steps":["trace[1648602944] 'process raft request'  (duration: 190.086808ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-01T09:09:45.83589Z","caller":"traceutil/trace.go:171","msg":"trace[579062441] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"192.780308ms","start":"2024-02-01T09:09:45.643094Z","end":"2024-02-01T09:09:45.835874Z","steps":["trace[579062441] 'process raft request'  (duration: 87.568243ms)","trace[579062441] 'compare'  (duration: 102.802878ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-01T09:09:46.142634Z","caller":"traceutil/trace.go:171","msg":"trace[1529098278] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"101.320222ms","start":"2024-02-01T09:09:46.041299Z","end":"2024-02-01T09:09:46.142619Z","steps":[],"step_count":0}
	{"level":"info","ts":"2024-02-01T09:09:46.143017Z","caller":"traceutil/trace.go:171","msg":"trace[2078178562] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"101.403853ms","start":"2024-02-01T09:09:46.04159Z","end":"2024-02-01T09:09:46.142993Z","steps":["trace[2078178562] 'process raft request'  (duration: 100.957318ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-01T09:09:46.543063Z","caller":"traceutil/trace.go:171","msg":"trace[353493847] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"102.143208ms","start":"2024-02-01T09:09:46.440901Z","end":"2024-02-01T09:09:46.543044Z","steps":["trace[353493847] 'process raft request'  (duration: 97.105134ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-01T09:09:46.543216Z","caller":"traceutil/trace.go:171","msg":"trace[581074164] linearizableReadLoop","detail":"{readStateIndex:417; appliedIndex:416; }","duration":"100.509541ms","start":"2024-02-01T09:09:46.442698Z","end":"2024-02-01T09:09:46.543207Z","steps":["trace[581074164] 'read index received'  (duration: 95.742696ms)","trace[581074164] 'applied index is now lower than readState.Index'  (duration: 4.76603ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-01T09:09:46.543291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.602564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-01T09:09:46.543319Z","caller":"traceutil/trace.go:171","msg":"trace[411364866] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:0; response_revision:406; }","duration":"100.645961ms","start":"2024-02-01T09:09:46.442666Z","end":"2024-02-01T09:09:46.543312Z","steps":["trace[411364866] 'agreement among raft nodes before linearized reading'  (duration: 100.581921ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-01T09:09:46.543578Z","caller":"traceutil/trace.go:171","msg":"trace[757647246] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"100.406921ms","start":"2024-02-01T09:09:46.443162Z","end":"2024-02-01T09:09:46.543569Z","steps":["trace[757647246] 'process raft request'  (duration: 99.519539ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-01T09:09:46.750851Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.257618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-01T09:09:46.830651Z","caller":"traceutil/trace.go:171","msg":"trace[572632252] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:416; }","duration":"181.049058ms","start":"2024-02-01T09:09:46.649563Z","end":"2024-02-01T09:09:46.830612Z","steps":["trace[572632252] 'agreement among raft nodes before linearized reading'  (duration: 101.205179ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-01T09:09:46.843386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.520199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3455"}
	{"level":"info","ts":"2024-02-01T09:09:46.843573Z","caller":"traceutil/trace.go:171","msg":"trace[1059453143] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:419; }","duration":"193.718627ms","start":"2024-02-01T09:09:46.649837Z","end":"2024-02-01T09:09:46.843555Z","steps":["trace[1059453143] 'agreement among raft nodes before linearized reading'  (duration: 193.461223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-01T09:09:46.843816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.130481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-02-01T09:09:46.843916Z","caller":"traceutil/trace.go:171","msg":"trace[595456428] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:419; }","duration":"194.23581ms","start":"2024-02-01T09:09:46.64967Z","end":"2024-02-01T09:09:46.843906Z","steps":["trace[595456428] 'agreement among raft nodes before linearized reading'  (duration: 194.085681ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-01T09:09:46.844108Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.450586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-01T09:09:46.844173Z","caller":"traceutil/trace.go:171","msg":"trace[635167479] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:419; }","duration":"194.517668ms","start":"2024-02-01T09:09:46.649648Z","end":"2024-02-01T09:09:46.844166Z","steps":["trace[635167479] 'agreement among raft nodes before linearized reading'  (duration: 194.433127ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-01T09:11:09.145117Z","caller":"traceutil/trace.go:171","msg":"trace[1315432883] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"200.237708ms","start":"2024-02-01T09:11:08.944847Z","end":"2024-02-01T09:11:09.145085Z","steps":["trace[1315432883] 'process raft request'  (duration: 117.141702ms)","trace[1315432883] 'compare'  (duration: 82.910917ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-01T09:11:25.754011Z","caller":"traceutil/trace.go:171","msg":"trace[881509867] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1345; }","duration":"120.851077ms","start":"2024-02-01T09:11:25.633139Z","end":"2024-02-01T09:11:25.75399Z","steps":["trace[881509867] 'process raft request'  (duration: 120.682673ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-01T09:11:25.796854Z","caller":"traceutil/trace.go:171","msg":"trace[1379402890] transaction","detail":"{read_only:false; response_revision:1346; number_of_response:1; }","duration":"163.536174ms","start":"2024-02-01T09:11:25.633298Z","end":"2024-02-01T09:11:25.796834Z","steps":["trace[1379402890] 'process raft request'  (duration: 163.391077ms)"],"step_count":1}
	
	
	==> gcp-auth [86d3f36b801a3a87d3e8da4a887460aa1d44646e2d0f6db2b98d7e3f48f43098] <==
	2024/02/01 09:11:01 GCP Auth Webhook started!
	2024/02/01 09:11:13 Ready to marshal response ...
	2024/02/01 09:11:13 Ready to write response ...
	2024/02/01 09:11:13 Ready to marshal response ...
	2024/02/01 09:11:13 Ready to write response ...
	2024/02/01 09:11:23 Ready to marshal response ...
	2024/02/01 09:11:23 Ready to write response ...
	2024/02/01 09:11:23 Ready to marshal response ...
	2024/02/01 09:11:23 Ready to write response ...
	2024/02/01 09:11:32 Ready to marshal response ...
	2024/02/01 09:11:32 Ready to write response ...
	2024/02/01 09:11:32 Ready to marshal response ...
	2024/02/01 09:11:32 Ready to write response ...
	2024/02/01 09:11:32 Ready to marshal response ...
	2024/02/01 09:11:32 Ready to write response ...
	2024/02/01 09:11:34 Ready to marshal response ...
	2024/02/01 09:11:34 Ready to write response ...
	2024/02/01 09:11:49 Ready to marshal response ...
	2024/02/01 09:11:49 Ready to write response ...
	2024/02/01 09:12:03 Ready to marshal response ...
	2024/02/01 09:12:03 Ready to write response ...
	2024/02/01 09:12:23 Ready to marshal response ...
	2024/02/01 09:12:23 Ready to write response ...
	2024/02/01 09:13:35 Ready to marshal response ...
	2024/02/01 09:13:35 Ready to write response ...
	
	
	==> kernel <==
	 09:13:45 up 15:56,  0 users,  load average: 0.26, 0.77, 1.44
	Linux addons-642352 5.15.0-1049-gcp #57~20.04.1-Ubuntu SMP Wed Jan 17 16:04:23 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [eb07b25a2d8b02843d1d94db95fb299db42db259c6ad937a2d56a9fd3cebb0c0] <==
	I0201 09:11:40.479291       1 main.go:227] handling current node
	I0201 09:11:50.488447       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:11:50.488481       1 main.go:227] handling current node
	I0201 09:12:00.496526       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:12:00.496555       1 main.go:227] handling current node
	I0201 09:12:10.500977       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:12:10.501003       1 main.go:227] handling current node
	I0201 09:12:20.513904       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:12:20.513929       1 main.go:227] handling current node
	I0201 09:12:30.526771       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:12:30.526803       1 main.go:227] handling current node
	I0201 09:12:40.531145       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:12:40.531172       1 main.go:227] handling current node
	I0201 09:12:50.539277       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:12:50.539307       1 main.go:227] handling current node
	I0201 09:13:00.544820       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:13:00.544849       1 main.go:227] handling current node
	I0201 09:13:10.554046       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:13:10.554074       1 main.go:227] handling current node
	I0201 09:13:20.565666       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:13:20.565692       1 main.go:227] handling current node
	I0201 09:13:30.577774       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:13:30.577802       1 main.go:227] handling current node
	I0201 09:13:40.589764       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:13:40.589788       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4b243027fc21a6f0805393d17d2db16d653d062a55ddf49a14fa041da227041d] <==
	I0201 09:11:26.183516       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0201 09:11:32.530280       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.114.236"}
	E0201 09:11:51.742333       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0201 09:11:53.310330       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:38090: read: connection reset by peer
	I0201 09:12:17.797495       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0201 09:12:18.688696       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0201 09:12:41.003413       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0201 09:12:41.003470       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0201 09:12:41.010492       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0201 09:12:41.010555       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0201 09:12:41.017719       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0201 09:12:41.017890       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0201 09:12:41.018797       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0201 09:12:41.018898       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0201 09:12:41.031286       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0201 09:12:41.031429       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0201 09:12:41.035644       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0201 09:12:41.035804       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0201 09:12:41.047615       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0201 09:12:41.049563       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0201 09:12:41.049595       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	W0201 09:12:42.019536       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0201 09:12:42.048812       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0201 09:12:42.058139       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0201 09:13:35.400490       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.86.134"}
	
	
	==> kube-controller-manager [7949a0c4d089777eb1fa2c8a928c705268156eb66fe00b01925a2678bef573aa] <==
	W0201 09:12:56.482301       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0201 09:12:56.482334       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0201 09:12:59.649693       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0201 09:12:59.649728       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0201 09:12:59.808059       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0201 09:12:59.808093       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0201 09:13:03.452125       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0201 09:13:03.452161       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0201 09:13:14.500201       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0201 09:13:14.500245       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0201 09:13:14.817854       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0201 09:13:14.817890       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0201 09:13:22.463227       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0201 09:13:22.463261       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0201 09:13:35.234619       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0201 09:13:35.245314       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-gtfbn"
	I0201 09:13:35.251068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.758439ms"
	I0201 09:13:35.256272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.147387ms"
	I0201 09:13:35.256399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.864µs"
	I0201 09:13:35.262304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.617µs"
	I0201 09:13:37.280350       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0201 09:13:37.280429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="7.582µs"
	I0201 09:13:37.283427       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0201 09:13:38.408664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.871619ms"
	I0201 09:13:38.408759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.269µs"
	
	
	==> kube-proxy [6a9b051d431b740b081c01f29a7f09791df155928bb79bb10ee69b8155621160] <==
	I0201 09:09:45.235753       1 server_others.go:69] "Using iptables proxy"
	I0201 09:09:45.453494       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0201 09:09:47.051827       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0201 09:09:47.237586       1 server_others.go:152] "Using iptables Proxier"
	I0201 09:09:47.237665       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0201 09:09:47.237678       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0201 09:09:47.237720       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0201 09:09:47.238008       1 server.go:846] "Version info" version="v1.28.4"
	I0201 09:09:47.238036       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0201 09:09:47.239046       1 config.go:188] "Starting service config controller"
	I0201 09:09:47.239132       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0201 09:09:47.239194       1 config.go:97] "Starting endpoint slice config controller"
	I0201 09:09:47.239223       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0201 09:09:47.239821       1 config.go:315] "Starting node config controller"
	I0201 09:09:47.239877       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0201 09:09:47.344823       1 shared_informer.go:318] Caches are synced for node config
	I0201 09:09:47.344860       1 shared_informer.go:318] Caches are synced for service config
	I0201 09:09:47.344890       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [53d39c795f697fdb588a98ca700e72d88126bf79f76af820ba43f2e885a71258] <==
	W0201 09:09:26.357989       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0201 09:09:26.358006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0201 09:09:26.358066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0201 09:09:26.358081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0201 09:09:26.358145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0201 09:09:26.358170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0201 09:09:26.358330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0201 09:09:26.358355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0201 09:09:27.271964       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0201 09:09:27.271995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0201 09:09:27.281451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0201 09:09:27.281490       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0201 09:09:27.312920       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0201 09:09:27.312960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0201 09:09:27.324394       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0201 09:09:27.324431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0201 09:09:27.339690       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0201 09:09:27.339726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0201 09:09:27.396851       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0201 09:09:27.396885       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0201 09:09:27.435852       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0201 09:09:27.435890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0201 09:09:27.611792       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0201 09:09:27.611827       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0201 09:09:29.450861       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 01 09:13:35 addons-642352 kubelet[1551]: I0201 09:13:35.443032    1551 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mh9w\" (UniqueName: \"kubernetes.io/projected/55e16530-1b4f-4351-99cc-a2cce79bbc11-kube-api-access-9mh9w\") pod \"hello-world-app-5d77478584-gtfbn\" (UID: \"55e16530-1b4f-4351-99cc-a2cce79bbc11\") " pod="default/hello-world-app-5d77478584-gtfbn"
	Feb 01 09:13:35 addons-642352 kubelet[1551]: I0201 09:13:35.443109    1551 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/55e16530-1b4f-4351-99cc-a2cce79bbc11-gcp-creds\") pod \"hello-world-app-5d77478584-gtfbn\" (UID: \"55e16530-1b4f-4351-99cc-a2cce79bbc11\") " pod="default/hello-world-app-5d77478584-gtfbn"
	Feb 01 09:13:35 addons-642352 kubelet[1551]: W0201 09:13:35.888208    1551 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ba9aca09f642738d1e391d3fcd2462426a7803a0e2d60cc2f60823541ed64bf0/crio-4d9aebf8b2e08c5572124f0f53f61f2a4e6927e5b9bbd2a1c1a33c9004617ab9 WatchSource:0}: Error finding container 4d9aebf8b2e08c5572124f0f53f61f2a4e6927e5b9bbd2a1c1a33c9004617ab9: Status 404 returned error can't find the container with id 4d9aebf8b2e08c5572124f0f53f61f2a4e6927e5b9bbd2a1c1a33c9004617ab9
	Feb 01 09:13:36 addons-642352 kubelet[1551]: I0201 09:13:36.385028    1551 scope.go:117] "RemoveContainer" containerID="a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f"
	Feb 01 09:13:36 addons-642352 kubelet[1551]: I0201 09:13:36.400407    1551 scope.go:117] "RemoveContainer" containerID="a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f"
	Feb 01 09:13:36 addons-642352 kubelet[1551]: E0201 09:13:36.400882    1551 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f\": container with ID starting with a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f not found: ID does not exist" containerID="a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f"
	Feb 01 09:13:36 addons-642352 kubelet[1551]: I0201 09:13:36.400932    1551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f"} err="failed to get container status \"a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f\": rpc error: code = NotFound desc = could not find container \"a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f\": container with ID starting with a31ea1a1a3ab2ccbc2c39f6a89f378d7925c22580589ee76e178d6cb94e47c7f not found: ID does not exist"
	Feb 01 09:13:36 addons-642352 kubelet[1551]: I0201 09:13:36.451992    1551 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5qsc\" (UniqueName: \"kubernetes.io/projected/46dc9e1a-2137-442c-993a-921d6322672a-kube-api-access-g5qsc\") pod \"46dc9e1a-2137-442c-993a-921d6322672a\" (UID: \"46dc9e1a-2137-442c-993a-921d6322672a\") "
	Feb 01 09:13:36 addons-642352 kubelet[1551]: I0201 09:13:36.453920    1551 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46dc9e1a-2137-442c-993a-921d6322672a-kube-api-access-g5qsc" (OuterVolumeSpecName: "kube-api-access-g5qsc") pod "46dc9e1a-2137-442c-993a-921d6322672a" (UID: "46dc9e1a-2137-442c-993a-921d6322672a"). InnerVolumeSpecName "kube-api-access-g5qsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 01 09:13:36 addons-642352 kubelet[1551]: I0201 09:13:36.552923    1551 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g5qsc\" (UniqueName: \"kubernetes.io/projected/46dc9e1a-2137-442c-993a-921d6322672a-kube-api-access-g5qsc\") on node \"addons-642352\" DevicePath \"\""
	Feb 01 09:13:37 addons-642352 kubelet[1551]: I0201 09:13:37.054967    1551 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="46dc9e1a-2137-442c-993a-921d6322672a" path="/var/lib/kubelet/pods/46dc9e1a-2137-442c-993a-921d6322672a/volumes"
	Feb 01 09:13:38 addons-642352 kubelet[1551]: I0201 09:13:38.403126    1551 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-gtfbn" podStartSLOduration=1.426987545 podCreationTimestamp="2024-02-01 09:13:35 +0000 UTC" firstStartedPulling="2024-02-01 09:13:35.891372315 +0000 UTC m=+246.969754427" lastFinishedPulling="2024-02-01 09:13:37.867464754 +0000 UTC m=+248.945846862" observedRunningTime="2024-02-01 09:13:38.402585571 +0000 UTC m=+249.480967685" watchObservedRunningTime="2024-02-01 09:13:38.40307998 +0000 UTC m=+249.481462092"
	Feb 01 09:13:39 addons-642352 kubelet[1551]: I0201 09:13:39.054991    1551 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a039fc16-f7c1-49e3-b072-9938e3b045bb" path="/var/lib/kubelet/pods/a039fc16-f7c1-49e3-b072-9938e3b045bb/volumes"
	Feb 01 09:13:39 addons-642352 kubelet[1551]: I0201 09:13:39.055347    1551 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d0fb6356-f017-476b-8046-fd70e977d8ff" path="/var/lib/kubelet/pods/d0fb6356-f017-476b-8046-fd70e977d8ff/volumes"
	Feb 01 09:13:40 addons-642352 kubelet[1551]: I0201 09:13:40.579268    1551 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc254d8c-527a-49b3-8571-5f674630e01b-webhook-cert\") pod \"cc254d8c-527a-49b3-8571-5f674630e01b\" (UID: \"cc254d8c-527a-49b3-8571-5f674630e01b\") "
	Feb 01 09:13:40 addons-642352 kubelet[1551]: I0201 09:13:40.579348    1551 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwlx6\" (UniqueName: \"kubernetes.io/projected/cc254d8c-527a-49b3-8571-5f674630e01b-kube-api-access-nwlx6\") pod \"cc254d8c-527a-49b3-8571-5f674630e01b\" (UID: \"cc254d8c-527a-49b3-8571-5f674630e01b\") "
	Feb 01 09:13:40 addons-642352 kubelet[1551]: I0201 09:13:40.581355    1551 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc254d8c-527a-49b3-8571-5f674630e01b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "cc254d8c-527a-49b3-8571-5f674630e01b" (UID: "cc254d8c-527a-49b3-8571-5f674630e01b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 01 09:13:40 addons-642352 kubelet[1551]: I0201 09:13:40.581863    1551 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc254d8c-527a-49b3-8571-5f674630e01b-kube-api-access-nwlx6" (OuterVolumeSpecName: "kube-api-access-nwlx6") pod "cc254d8c-527a-49b3-8571-5f674630e01b" (UID: "cc254d8c-527a-49b3-8571-5f674630e01b"). InnerVolumeSpecName "kube-api-access-nwlx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 01 09:13:40 addons-642352 kubelet[1551]: I0201 09:13:40.680177    1551 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nwlx6\" (UniqueName: \"kubernetes.io/projected/cc254d8c-527a-49b3-8571-5f674630e01b-kube-api-access-nwlx6\") on node \"addons-642352\" DevicePath \"\""
	Feb 01 09:13:40 addons-642352 kubelet[1551]: I0201 09:13:40.680223    1551 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc254d8c-527a-49b3-8571-5f674630e01b-webhook-cert\") on node \"addons-642352\" DevicePath \"\""
	Feb 01 09:13:41 addons-642352 kubelet[1551]: I0201 09:13:41.054633    1551 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cc254d8c-527a-49b3-8571-5f674630e01b" path="/var/lib/kubelet/pods/cc254d8c-527a-49b3-8571-5f674630e01b/volumes"
	Feb 01 09:13:41 addons-642352 kubelet[1551]: I0201 09:13:41.401287    1551 scope.go:117] "RemoveContainer" containerID="a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0"
	Feb 01 09:13:41 addons-642352 kubelet[1551]: I0201 09:13:41.416461    1551 scope.go:117] "RemoveContainer" containerID="a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0"
	Feb 01 09:13:41 addons-642352 kubelet[1551]: E0201 09:13:41.416900    1551 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0\": container with ID starting with a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0 not found: ID does not exist" containerID="a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0"
	Feb 01 09:13:41 addons-642352 kubelet[1551]: I0201 09:13:41.416952    1551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0"} err="failed to get container status \"a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0\": rpc error: code = NotFound desc = could not find container \"a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0\": container with ID starting with a54df7cee08c7b932964eb5a1c87d9fa818f1e82d70b34643f0cf4e6a340d4c0 not found: ID does not exist"
	
	
	==> storage-provisioner [dba6c9646ec96e00d1adc43b899985ab7e90b2cb3538afc5b34b5893ea2c92e2] <==
	I0201 09:09:53.335337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0201 09:09:53.347421       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0201 09:09:53.347471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0201 09:09:53.355505       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0201 09:09:53.355727       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-642352_3810d19d-1e08-4256-b3a0-b060c69f6f92!
	I0201 09:09:53.356139       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c924034c-021f-4701-9979-d9844442e945", APIVersion:"v1", ResourceVersion:"861", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-642352_3810d19d-1e08-4256-b3a0-b060c69f6f92 became leader
	I0201 09:09:53.530672       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-642352_3810d19d-1e08-4256-b3a0-b060c69f6f92!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-642352 -n addons-642352
helpers_test.go:261: (dbg) Run:  kubectl --context addons-642352 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (163.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-571055 /tmp/TestFunctionalserialCacheCmdcacheadd_local3143688551/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 cache add minikube-local-cache-test:functional-571055
functional_test.go:1085: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 cache add minikube-local-cache-test:functional-571055: exit status 10 (878.088028ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: Failed to cache and load images: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/minikube-local-cache-test_functional-571055": write: unable to calculate manifest: blob sha256:ccb04bba9b8409f6e3284eae48eda163fb68522278a7f028ae1163c084743ecb not found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_7e50e1679d6258da4837f1c3fad1bbd23e1443bd_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1087: failed to 'cache add' local image "minikube-local-cache-test:functional-571055". args "out/minikube-linux-amd64 -p functional-571055 cache add minikube-local-cache-test:functional-571055" err exit status 10
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 cache delete minikube-local-cache-test:functional-571055
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 cache delete minikube-local-cache-test:functional-571055: exit status 30 (80.550322ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: Failed to delete images: remove /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/minikube-local-cache-test_functional-571055: no such file or directory
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_7fc17bd91201ab35c95adc2b82c8ec3b6302163c_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1092: failed to 'cache delete' local image "minikube-local-cache-test:functional-571055". args "out/minikube-linux-amd64 -p functional-571055 cache delete minikube-local-cache-test:functional-571055" err exit status 30
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-571055
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image load --daemon gcr.io/google-containers/addon-resizer:functional-571055 --alsologtostderr
functional_test.go:354: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 image load --daemon gcr.io/google-containers/addon-resizer:functional-571055 --alsologtostderr: exit status 80 (1.085668505s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:17:04.834806  992933 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:04.835110  992933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:04.835123  992933 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:04.835130  992933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:04.835347  992933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:04.836005  992933 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:04.836101  992933 cache.go:107] acquiring lock: {Name:mk2348412408711ace9e7c2b19e57d9e074cccd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0201 09:17:04.836331  992933 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-571055
	I0201 09:17:04.838175  992933 image.go:173] found gcr.io/google-containers/addon-resizer:functional-571055 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-571055 original:gcr.io/google-containers/addon-resizer:functional-571055} opener:0xc000598000 tarballImage:<nil> computed:false id:0xc00087e140 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0201 09:17:04.838206  992933 cache.go:162] opening:  /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055
	I0201 09:17:05.827734  992933 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-571055" -> "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055" took 991.649665ms
	I0201 09:17:05.831161  992933 out.go:177] 
	W0201 09:17:05.833343  992933 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0201 09:17:05.833373  992933 out.go:239] * 
	* 
	W0201 09:17:05.842927  992933 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0201 09:17:05.845130  992933 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:356: loading image into minikube from daemon: exit status 80

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:17:04.834806  992933 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:04.835110  992933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:04.835123  992933 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:04.835130  992933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:04.835347  992933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:04.836005  992933 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:04.836101  992933 cache.go:107] acquiring lock: {Name:mk2348412408711ace9e7c2b19e57d9e074cccd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0201 09:17:04.836331  992933 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-571055
	I0201 09:17:04.838175  992933 image.go:173] found gcr.io/google-containers/addon-resizer:functional-571055 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-571055 original:gcr.io/google-containers/addon-resizer:functional-571055} opener:0xc000598000 tarballImage:<nil> computed:false id:0xc00087e140 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0201 09:17:04.838206  992933 cache.go:162] opening:  /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055
	I0201 09:17:05.827734  992933 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-571055" -> "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055" took 991.649665ms
	I0201 09:17:05.831161  992933 out.go:177] 
	W0201 09:17:05.833343  992933 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0201 09:17:05.833373  992933 out.go:239] * 
	* 
	W0201 09:17:05.842927  992933 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0201 09:17:05.845130  992933 out.go:177] 

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image load --daemon gcr.io/google-containers/addon-resizer:functional-571055 --alsologtostderr
functional_test.go:364: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 image load --daemon gcr.io/google-containers/addon-resizer:functional-571055 --alsologtostderr: exit status 80 (710.188124ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:17:05.912157  992948 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:05.912309  992948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:05.912321  992948 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:05.912326  992948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:05.912556  992948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:05.913283  992948 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:05.913352  992948 cache.go:107] acquiring lock: {Name:mk2348412408711ace9e7c2b19e57d9e074cccd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0201 09:17:05.913452  992948 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-571055
	I0201 09:17:05.915412  992948 image.go:173] found gcr.io/google-containers/addon-resizer:functional-571055 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-571055 original:gcr.io/google-containers/addon-resizer:functional-571055} opener:0xc00016c000 tarballImage:<nil> computed:false id:0xc000ca40e0 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0201 09:17:05.915450  992948 cache.go:162] opening:  /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055
	I0201 09:17:06.536928  992948 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-571055" -> "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055" took 623.584013ms
	I0201 09:17:06.539935  992948 out.go:177] 
	W0201 09:17:06.541566  992948 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0201 09:17:06.541597  992948 out.go:239] * 
	* 
	W0201 09:17:06.553220  992948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0201 09:17:06.555254  992948 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:366: loading image into minikube from daemon: exit status 80

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:17:05.912157  992948 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:05.912309  992948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:05.912321  992948 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:05.912326  992948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:05.912556  992948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:05.913283  992948 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:05.913352  992948 cache.go:107] acquiring lock: {Name:mk2348412408711ace9e7c2b19e57d9e074cccd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0201 09:17:05.913452  992948 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-571055
	I0201 09:17:05.915412  992948 image.go:173] found gcr.io/google-containers/addon-resizer:functional-571055 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-571055 original:gcr.io/google-containers/addon-resizer:functional-571055} opener:0xc00016c000 tarballImage:<nil> computed:false id:0xc000ca40e0 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0201 09:17:05.915450  992948 cache.go:162] opening:  /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055
	I0201 09:17:06.536928  992948 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-571055" -> "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055" took 623.584013ms
	I0201 09:17:06.539935  992948 out.go:177] 
	W0201 09:17:06.541566  992948 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0201 09:17:06.541597  992948 out.go:239] * 
	* 
	W0201 09:17:06.553220  992948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0201 09:17:06.555254  992948 out.go:177] 

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.050197309s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-571055
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image load --daemon gcr.io/google-containers/addon-resizer:functional-571055 --alsologtostderr
functional_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 image load --daemon gcr.io/google-containers/addon-resizer:functional-571055 --alsologtostderr: exit status 80 (1.114668832s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:17:08.709033  992985 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:08.709284  992985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:08.709292  992985 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:08.709297  992985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:08.709501  992985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:08.710095  992985 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:08.710169  992985 cache.go:107] acquiring lock: {Name:mk2348412408711ace9e7c2b19e57d9e074cccd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0201 09:17:08.710262  992985 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-571055
	I0201 09:17:08.712079  992985 image.go:173] found gcr.io/google-containers/addon-resizer:functional-571055 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-571055 original:gcr.io/google-containers/addon-resizer:functional-571055} opener:0xc0000f0c40 tarballImage:<nil> computed:false id:0xc0009f6080 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0201 09:17:08.712122  992985 cache.go:162] opening:  /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055
	I0201 09:17:09.679167  992985 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-571055" -> "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055" took 969.008781ms
	I0201 09:17:09.682771  992985 out.go:177] 
	W0201 09:17:09.732075  992985 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	W0201 09:17:09.732108  992985 out.go:239] * 
	* 
	W0201 09:17:09.746187  992985 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0201 09:17:09.748612  992985 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:246: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image save gcr.io/google-containers/addon-resizer:functional-571055 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-571055 image save gcr.io/google-containers/addon-resizer:functional-571055 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.966629675s)
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.97s)
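The assertion behind this failure is simple: after `image save`, the tarball must exist on the host. A minimal Go sketch of that check, reusing the binary, profile, and image tag from this run (the output path below is a stand-in, not the workspace path above); this is an illustration, not the harness code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Values below are taken from this run's log; the tarball path is a stand-in.
		bin := "out/minikube-linux-amd64"
		img := "gcr.io/google-containers/addon-resizer:functional-571055"
		tar := "/tmp/addon-resizer-save.tar"

		// Equivalent of `minikube -p functional-571055 image save <image> <tarball>`.
		cmd := exec.Command(bin, "-p", "functional-571055", "image", "save", img, tar, "--alsologtostderr")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("image save failed:", err)
			return
		}
		// The failing check: the saved tarball must exist afterwards.
		if _, err := os.Stat(tar); err != nil {
			fmt.Println("tarball missing after image save:", err)
			return
		}
		fmt.Println("tarball written:", tar)
	}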

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0201 09:17:12.829903  993708 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:12.830066  993708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:12.830096  993708 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:12.830105  993708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:12.830372  993708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:12.831295  993708 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:12.831464  993708 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:12.832083  993708 cli_runner.go:164] Run: docker container inspect functional-571055 --format={{.State.Status}}
	I0201 09:17:12.854214  993708 ssh_runner.go:195] Run: systemctl --version
	I0201 09:17:12.854314  993708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-571055
	I0201 09:17:12.873478  993708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/functional-571055/id_rsa Username:docker}
	I0201 09:17:12.970946  993708 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar
	W0201 09:17:12.971007  993708 cache_images.go:254] Failed to load cached images for profile functional-571055. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar: no such file or directory
	I0201 09:17:12.971024  993708 cache_images.go:262] succeeded pushing to: 
	I0201 09:17:12.971029  993708 cache_images.go:263] failed pushing to: functional-571055

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
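This failure is downstream of the previous one: the `image save` tarball was never written, so there is nothing to load. A sketch that guards the load with an explicit existence check, using the path and command from this run; the check itself is an illustration, not the test's code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Tarball path and command are the ones from this run's log.
		tar := "/home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar"
		if _, err := os.Stat(tar); err != nil {
			fmt.Println("nothing to load, tarball missing:", err)
			return
		}
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-571055",
			"image", "load", tar, "--alsologtostderr")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("image load failed:", err)
			return
		}
		fmt.Println("loaded", tar)
	}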

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-571055
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image save --daemon gcr.io/google-containers/addon-resizer:functional-571055 --alsologtostderr
functional_test.go:423: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 image save --daemon gcr.io/google-containers/addon-resizer:functional-571055 --alsologtostderr: exit status 80 (2.075125851s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:17:13.063295  993750 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:13.063416  993750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:13.063425  993750 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:13.063430  993750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:13.063676  993750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:13.064361  993750 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:13.064410  993750 cache_images.go:396] Save images: ["gcr.io/google-containers/addon-resizer:functional-571055"]
	I0201 09:17:13.064545  993750 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:13.065125  993750 cli_runner.go:164] Run: docker container inspect functional-571055 --format={{.State.Status}}
	I0201 09:17:13.085104  993750 cache_images.go:341] SaveImages start: [gcr.io/google-containers/addon-resizer:functional-571055]
	I0201 09:17:13.085244  993750 ssh_runner.go:195] Run: systemctl --version
	I0201 09:17:13.085334  993750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-571055
	I0201 09:17:13.103553  993750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/functional-571055/id_rsa Username:docker}
	I0201 09:17:13.195289  993750 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/google-containers/addon-resizer:functional-571055
	I0201 09:17:15.052527  993750 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/google-containers/addon-resizer:functional-571055: (1.857190082s)
	I0201 09:17:15.052611  993750 cache_images.go:345] SaveImages completed in 1.967469129s
	W0201 09:17:15.052622  993750 cache_images.go:442] Failed to load cached images for profile functional-571055. make sure the profile is running. saving cached images: image gcr.io/google-containers/addon-resizer:functional-571055 not found
	I0201 09:17:15.052643  993750 cache_images.go:450] succeeded pulling from : 
	I0201 09:17:15.052649  993750 cache_images.go:451] failed pulling from : functional-571055
	I0201 09:17:15.055299  993750 out.go:177] 
	W0201 09:17:15.056790  993750 out.go:239] X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055: no such file or directory
	X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-571055: no such file or directory
	W0201 09:17:15.056814  993750 out.go:239] * 
	* 
	W0201 09:17:15.066807  993750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0201 09:17:15.068696  993750 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:425: saving image from minikube to daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.10s)
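For a manual re-run of this round trip, a sketch under the same names as this run: remove the tag from the local daemon, ask minikube to save the in-cluster image back into the daemon, then confirm the tag reappears. This mirrors the steps above but is not the harness code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		img := "gcr.io/google-containers/addon-resizer:functional-571055" // tag from this run
		// Drop the tag from the local daemon first, as the test does.
		_ = run("docker", "rmi", img)
		// Ask minikube to copy the in-cluster image back into the daemon;
		// in this run the command exited with status 80 (GUEST_IMAGE_SAVE).
		if err := run("out/minikube-linux-amd64", "-p", "functional-571055",
			"image", "save", "--daemon", img, "--alsologtostderr"); err != nil {
			fmt.Println("image save --daemon failed:", err)
			return
		}
		// Verify the tag is present in the daemon again.
		if err := run("docker", "image", "inspect", img); err != nil {
			fmt.Println("image not found in docker daemon:", err)
			return
		}
		fmt.Println("image restored to docker daemon:", img)
	}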

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (182.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-518837 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-518837 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.440567879s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-518837 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-518837 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dda5f49c-d19c-47b3-a6f0-7cafe477c158] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dda5f49c-d19c-47b3-a6f0-7cafe477c158] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.004076007s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-518837 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0201 09:21:03.362485  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:21:31.046591  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:22:00.359650  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:00.364960  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:00.375321  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:00.395626  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:00.435927  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:00.516265  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:00.676693  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:00.997300  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:01.638302  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:02.918564  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:05.479625  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:22:10.600150  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-518837 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.363584412s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
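The probe that timed out is an ordinary virtual-host check: an HTTP request to the node's localhost with the Host header set to the Ingress rule's hostname. A minimal Go sketch of the same request (the test issues it from inside the node over ssh); URL and hostname are the ones from the log.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// The Host header selects the nginx.example.com Ingress rule, exactly as
		// `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` does.
		req.Host = "nginx.example.com"
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed (this run timed out, ssh exit status 28):", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}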
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-518837 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-518837 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0201 09:22:20.840585  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.009901642s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
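The nslookup step that failed is a DNS query for hello-john.test sent directly to the node IP, where the ingress-dns addon answers. A sketch of the same query in Go, with the name and server taken from the log; it is an illustration of the check, not the test's implementation.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Query the DNS server at the node IP directly, as `nslookup hello-john.test 192.168.49.2` does.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed (this run timed out: no servers could be reached):", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}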
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-518837 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-518837 addons disable ingress-dns --alsologtostderr -v=1: (1.27543353s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-518837 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-518837 addons disable ingress --alsologtostderr -v=1: (7.453642857s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-518837
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-518837:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14f5e6ce5db5b7245113929893d0cda5e42076ad33c37cbdcb282c22c666f813",
	        "Created": "2024-02-01T09:18:16.845336892Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1000415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-01T09:18:17.115949196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/14f5e6ce5db5b7245113929893d0cda5e42076ad33c37cbdcb282c22c666f813/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14f5e6ce5db5b7245113929893d0cda5e42076ad33c37cbdcb282c22c666f813/hostname",
	        "HostsPath": "/var/lib/docker/containers/14f5e6ce5db5b7245113929893d0cda5e42076ad33c37cbdcb282c22c666f813/hosts",
	        "LogPath": "/var/lib/docker/containers/14f5e6ce5db5b7245113929893d0cda5e42076ad33c37cbdcb282c22c666f813/14f5e6ce5db5b7245113929893d0cda5e42076ad33c37cbdcb282c22c666f813-json.log",
	        "Name": "/ingress-addon-legacy-518837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-518837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-518837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c64efd6263fdd30929f8587d81451b8eb23eb1601c1d5079fbfe0214c57636b2-init/diff:/var/lib/docker/overlay2/118cd56b7cf3f8f98e5d06fe937de6e8b842264a59a088dbb73626cf7e05fed3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c64efd6263fdd30929f8587d81451b8eb23eb1601c1d5079fbfe0214c57636b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c64efd6263fdd30929f8587d81451b8eb23eb1601c1d5079fbfe0214c57636b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c64efd6263fdd30929f8587d81451b8eb23eb1601c1d5079fbfe0214c57636b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-518837",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-518837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-518837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "MacAddress": "02:42:c0:a8:31:02",
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-518837",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-518837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "211a53bb1fd164118fa75e0c55f4a7784759b5046f48db4efa238db36ee42606",
	            "SandboxKey": "/var/run/docker/netns/211a53bb1fd1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34046"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34045"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34042"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34044"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34043"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-518837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "14f5e6ce5db5",
	                        "ingress-addon-legacy-518837"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "3122a09567f4cb10ae145660399371d44be4dbeebe3d459bf212ab8a6243777d",
	                    "EndpointID": "98c92c4acbc5dcd6e40d8ab93167cf5aa0bd72c3c7d56f00df0a7070e55edd46",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-518837",
	                        "14f5e6ce5db5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
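The Ports block above is how the harness reaches the node: each container port is published on 127.0.0.1 with an ephemeral host port (22/tcp -> 34046 in this inspect output). A small sketch of the lookup the cli_runner lines in this report perform, using the same Go template over `docker container inspect`.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Container name comes from this run; the template matches the one in the cli_runner log lines.
		name := "ingress-addon-legacy-518837"
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}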
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-518837 -n ingress-addon-legacy-518837
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-518837 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-518837 logs -n 25: (1.184302353s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-571055 image ls                                               | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	| ssh     | functional-571055 ssh stat                                               | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	|         | /mount-9p/created-by-test                                                |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh stat                                               | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	|         | /mount-9p/created-by-pod                                                 |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh sudo                                               | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh findmnt                                            | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-571055                                                     | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port1935960855/001:/mount-9p |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                      |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh findmnt                                            | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh -- ls                                              | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	|         | -la /mount-9p                                                            |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh sudo                                               | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC |                     |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| mount   | -p functional-571055                                                     | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560295867/001:/mount1   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-571055                                                     | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560295867/001:/mount3   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh findmnt                                            | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC |                     |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-571055                                                     | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560295867/001:/mount2   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh findmnt                                            | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh findmnt                                            | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	|         | -T /mount2                                                               |                             |         |         |                     |                     |
	| ssh     | functional-571055 ssh findmnt                                            | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	|         | -T /mount3                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-571055                                                     | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC |                     |
	|         | --kill=true                                                              |                             |         |         |                     |                     |
	| delete  | -p functional-571055                                                     | functional-571055           | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:17 UTC |
	| start   | -p ingress-addon-legacy-518837                                           | ingress-addon-legacy-518837 | jenkins | v1.32.0 | 01 Feb 24 09:17 UTC | 01 Feb 24 09:19 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                     |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-518837                                              | ingress-addon-legacy-518837 | jenkins | v1.32.0 | 01 Feb 24 09:19 UTC | 01 Feb 24 09:19 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-518837                                              | ingress-addon-legacy-518837 | jenkins | v1.32.0 | 01 Feb 24 09:19 UTC | 01 Feb 24 09:19 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-518837                                              | ingress-addon-legacy-518837 | jenkins | v1.32.0 | 01 Feb 24 09:20 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-518837 ip                                           | ingress-addon-legacy-518837 | jenkins | v1.32.0 | 01 Feb 24 09:22 UTC | 01 Feb 24 09:22 UTC |
	| addons  | ingress-addon-legacy-518837                                              | ingress-addon-legacy-518837 | jenkins | v1.32.0 | 01 Feb 24 09:22 UTC | 01 Feb 24 09:22 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-518837                                              | ingress-addon-legacy-518837 | jenkins | v1.32.0 | 01 Feb 24 09:22 UTC | 01 Feb 24 09:22 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/01 09:17:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0201 09:17:54.555209  999767 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:54.555341  999767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:54.555347  999767 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:54.555352  999767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:54.555536  999767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:54.556201  999767 out.go:303] Setting JSON to false
	I0201 09:17:54.557260  999767 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":57622,"bootTime":1706721453,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0201 09:17:54.557334  999767 start.go:138] virtualization: kvm guest
	I0201 09:17:54.560295  999767 out.go:177] * [ingress-addon-legacy-518837] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0201 09:17:54.562331  999767 out.go:177]   - MINIKUBE_LOCATION=18051
	I0201 09:17:54.562355  999767 notify.go:220] Checking for updates...
	I0201 09:17:54.564310  999767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0201 09:17:54.566307  999767 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:17:54.568169  999767 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	I0201 09:17:54.569896  999767 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0201 09:17:54.571643  999767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0201 09:17:54.573666  999767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0201 09:17:54.598146  999767 docker.go:122] docker version: linux-25.0.2:Docker Engine - Community
	I0201 09:17:54.598283  999767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:17:54.653107  999767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-01 09:17:54.643430463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:17:54.653203  999767 docker.go:295] overlay module found
	I0201 09:17:54.655979  999767 out.go:177] * Using the docker driver based on user configuration
	I0201 09:17:54.658014  999767 start.go:298] selected driver: docker
	I0201 09:17:54.658037  999767 start.go:902] validating driver "docker" against <nil>
	I0201 09:17:54.658067  999767 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0201 09:17:54.659310  999767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:17:54.712625  999767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-01 09:17:54.703004647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:17:54.712852  999767 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0201 09:17:54.713117  999767 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0201 09:17:54.715449  999767 out.go:177] * Using Docker driver with root privileges
	I0201 09:17:54.717441  999767 cni.go:84] Creating CNI manager for ""
	I0201 09:17:54.717466  999767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0201 09:17:54.717479  999767 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0201 09:17:54.717490  999767 start_flags.go:321] config:
	{Name:ingress-addon-legacy-518837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-518837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
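	For reference (a hedged sketch, not taken from this log), the cluster config dumped above corresponds roughly to a start invocation along the following lines; the exact flags the test harness passes may differ:

	    # approximate reconstruction of the start command behind the config above
	    out/minikube-linux-amd64 start -p ingress-addon-legacy-518837 \
	      --driver=docker --container-runtime=crio \
	      --kubernetes-version=v1.18.20 --memory=4096 --cpus=2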
	I0201 09:17:54.719538  999767 out.go:177] * Starting control plane node ingress-addon-legacy-518837 in cluster ingress-addon-legacy-518837
	I0201 09:17:54.721319  999767 cache.go:121] Beginning downloading kic base image for docker with crio
	I0201 09:17:54.723096  999767 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0201 09:17:54.724693  999767 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0201 09:17:54.724792  999767 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0201 09:17:54.741186  999767 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0201 09:17:54.741215  999767 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0201 09:17:54.820880  999767 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0201 09:17:54.820944  999767 cache.go:56] Caching tarball of preloaded images
	I0201 09:17:54.821146  999767 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0201 09:17:54.824039  999767 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0201 09:17:54.826746  999767 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:17:54.933107  999767 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0201 09:18:08.405626  999767 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:18:08.405730  999767 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:18:09.431762  999767 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
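	As a cross-check (a minimal sketch, not part of the captured log), the preload download and MD5 verification above can be reproduced by hand using the checksum embedded in the download URL:

	    # download the v1.18.20 CRI-O preload and verify it against the MD5 from the URL above
	    curl -fLo preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 \
	      'https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4'
	    echo '0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4' | md5sum -c -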
	I0201 09:18:09.432184  999767 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/config.json ...
	I0201 09:18:09.432222  999767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/config.json: {Name:mkd6fdfb027508cdc85b1f973e2e0e6cd2cd2520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:09.432423  999767 cache.go:194] Successfully downloaded all kic artifacts
	I0201 09:18:09.432461  999767 start.go:365] acquiring machines lock for ingress-addon-legacy-518837: {Name:mkc8698d28138b5f48ed07e87f7ae25ea40aa5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0201 09:18:09.432512  999767 start.go:369] acquired machines lock for "ingress-addon-legacy-518837" in 37.107µs
	I0201 09:18:09.432533  999767 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-518837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-518837 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0201 09:18:09.432625  999767 start.go:125] createHost starting for "" (driver="docker")
	I0201 09:18:09.440500  999767 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0201 09:18:09.440791  999767 start.go:159] libmachine.API.Create for "ingress-addon-legacy-518837" (driver="docker")
	I0201 09:18:09.440832  999767 client.go:168] LocalClient.Create starting
	I0201 09:18:09.440923  999767 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem
	I0201 09:18:09.440966  999767 main.go:141] libmachine: Decoding PEM data...
	I0201 09:18:09.440984  999767 main.go:141] libmachine: Parsing certificate...
	I0201 09:18:09.441045  999767 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18051-952908/.minikube/certs/cert.pem
	I0201 09:18:09.441066  999767 main.go:141] libmachine: Decoding PEM data...
	I0201 09:18:09.441077  999767 main.go:141] libmachine: Parsing certificate...
	I0201 09:18:09.441438  999767 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-518837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0201 09:18:09.459084  999767 cli_runner.go:211] docker network inspect ingress-addon-legacy-518837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0201 09:18:09.459179  999767 network_create.go:281] running [docker network inspect ingress-addon-legacy-518837] to gather additional debugging logs...
	I0201 09:18:09.459214  999767 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-518837
	W0201 09:18:09.474821  999767 cli_runner.go:211] docker network inspect ingress-addon-legacy-518837 returned with exit code 1
	I0201 09:18:09.474854  999767 network_create.go:284] error running [docker network inspect ingress-addon-legacy-518837]: docker network inspect ingress-addon-legacy-518837: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-518837 not found
	I0201 09:18:09.474873  999767 network_create.go:286] output of [docker network inspect ingress-addon-legacy-518837]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-518837 not found
	
	** /stderr **
	I0201 09:18:09.474968  999767 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0201 09:18:09.491123  999767 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015960}
	I0201 09:18:09.491185  999767 network_create.go:124] attempt to create docker network ingress-addon-legacy-518837 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0201 09:18:09.491240  999767 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-518837 ingress-addon-legacy-518837
	I0201 09:18:09.549362  999767 network_create.go:108] docker network ingress-addon-legacy-518837 192.168.49.0/24 created
	I0201 09:18:09.549398  999767 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-518837" container
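	The subnet and gateway picked above can be confirmed against the created network; a minimal sketch, not run by the test:

	    # print the subnet and gateway of the network minikube just created
	    docker network inspect ingress-addon-legacy-518837 \
	      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	    # expected output: 192.168.49.0/24 192.168.49.1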
	I0201 09:18:09.549460  999767 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0201 09:18:09.566333  999767 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-518837 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-518837 --label created_by.minikube.sigs.k8s.io=true
	I0201 09:18:09.583801  999767 oci.go:103] Successfully created a docker volume ingress-addon-legacy-518837
	I0201 09:18:09.583905  999767 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-518837-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-518837 --entrypoint /usr/bin/test -v ingress-addon-legacy-518837:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0201 09:18:11.360693  999767 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-518837-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-518837 --entrypoint /usr/bin/test -v ingress-addon-legacy-518837:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.776730023s)
	I0201 09:18:11.360735  999767 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-518837
	I0201 09:18:11.360755  999767 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0201 09:18:11.360783  999767 kic.go:194] Starting extracting preloaded images to volume ...
	I0201 09:18:11.360855  999767 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-518837:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0201 09:18:16.766251  999767 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-518837:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.405343769s)
	I0201 09:18:16.766300  999767 kic.go:203] duration metric: took 5.405509 seconds to extract preloaded images to volume
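	The contents of the populated volume can be listed with a throwaway container; a hedged sketch, and the storage path under /var is an assumption about where CRI-O keeps its image store:

	    # list the image store extracted into the ingress-addon-legacy-518837 volume
	    docker run --rm --entrypoint /bin/ls \
	      -v ingress-addon-legacy-518837:/var \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 \
	      /var/lib/containers/storage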
	W0201 09:18:16.766482  999767 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0201 09:18:16.766611  999767 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0201 09:18:16.828813  999767 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-518837 --name ingress-addon-legacy-518837 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-518837 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-518837 --network ingress-addon-legacy-518837 --ip 192.168.49.2 --volume ingress-addon-legacy-518837:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0201 09:18:17.123611  999767 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-518837 --format={{.State.Running}}
	I0201 09:18:17.141098  999767 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-518837 --format={{.State.Status}}
	I0201 09:18:17.159548  999767 cli_runner.go:164] Run: docker exec ingress-addon-legacy-518837 stat /var/lib/dpkg/alternatives/iptables
	I0201 09:18:17.203007  999767 oci.go:144] the created container "ingress-addon-legacy-518837" has a running status.
	I0201 09:18:17.203052  999767 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa...
	I0201 09:18:17.345023  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0201 09:18:17.345071  999767 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0201 09:18:17.371413  999767 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-518837 --format={{.State.Status}}
	I0201 09:18:17.392338  999767 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0201 09:18:17.392363  999767 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-518837 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0201 09:18:17.440943  999767 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-518837 --format={{.State.Status}}
	I0201 09:18:17.460508  999767 machine.go:88] provisioning docker machine ...
	I0201 09:18:17.460545  999767 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-518837"
	I0201 09:18:17.460666  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:17.492129  999767 main.go:141] libmachine: Using SSH client type: native
	I0201 09:18:17.492495  999767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 127.0.0.1 34046 <nil> <nil>}
	I0201 09:18:17.492511  999767 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-518837 && echo "ingress-addon-legacy-518837" | sudo tee /etc/hostname
	I0201 09:18:17.493256  999767 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39986->127.0.0.1:34046: read: connection reset by peer
	I0201 09:18:20.641217  999767 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-518837
	
	I0201 09:18:20.641314  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:20.657996  999767 main.go:141] libmachine: Using SSH client type: native
	I0201 09:18:20.658348  999767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 127.0.0.1 34046 <nil> <nil>}
	I0201 09:18:20.658376  999767 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-518837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-518837/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-518837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0201 09:18:20.791109  999767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0201 09:18:20.791149  999767 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18051-952908/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-952908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-952908/.minikube}
	I0201 09:18:20.791205  999767 ubuntu.go:177] setting up certificates
	I0201 09:18:20.791228  999767 provision.go:83] configureAuth start
	I0201 09:18:20.791317  999767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-518837
	I0201 09:18:20.810116  999767 provision.go:138] copyHostCerts
	I0201 09:18:20.810165  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18051-952908/.minikube/ca.pem
	I0201 09:18:20.810197  999767 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-952908/.minikube/ca.pem, removing ...
	I0201 09:18:20.810203  999767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.pem
	I0201 09:18:20.810273  999767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-952908/.minikube/ca.pem (1078 bytes)
	I0201 09:18:20.810346  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18051-952908/.minikube/cert.pem
	I0201 09:18:20.810365  999767 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-952908/.minikube/cert.pem, removing ...
	I0201 09:18:20.810369  999767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-952908/.minikube/cert.pem
	I0201 09:18:20.810407  999767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-952908/.minikube/cert.pem (1123 bytes)
	I0201 09:18:20.810466  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18051-952908/.minikube/key.pem
	I0201 09:18:20.810490  999767 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-952908/.minikube/key.pem, removing ...
	I0201 09:18:20.810494  999767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-952908/.minikube/key.pem
	I0201 09:18:20.810524  999767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-952908/.minikube/key.pem (1675 bytes)
	I0201 09:18:20.810572  999767 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-952908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-518837 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-518837]
	I0201 09:18:21.517545  999767 provision.go:172] copyRemoteCerts
	I0201 09:18:21.517615  999767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0201 09:18:21.517654  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:21.534880  999767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34046 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa Username:docker}
	I0201 09:18:21.631503  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0201 09:18:21.631563  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0201 09:18:21.654425  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0201 09:18:21.654487  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0201 09:18:21.677504  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0201 09:18:21.677575  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0201 09:18:21.701410  999767 provision.go:86] duration metric: configureAuth took 910.162729ms
	I0201 09:18:21.701441  999767 ubuntu.go:193] setting minikube options for container-runtime
	I0201 09:18:21.701628  999767 config.go:182] Loaded profile config "ingress-addon-legacy-518837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0201 09:18:21.701765  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:21.719518  999767 main.go:141] libmachine: Using SSH client type: native
	I0201 09:18:21.719881  999767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 127.0.0.1 34046 <nil> <nil>}
	I0201 09:18:21.719898  999767 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0201 09:18:21.968770  999767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0201 09:18:21.968801  999767 machine.go:91] provisioned docker machine in 4.508269133s
	I0201 09:18:21.968815  999767 client.go:171] LocalClient.Create took 12.527977009s
	I0201 09:18:21.968841  999767 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-518837" took 12.528051292s
	I0201 09:18:21.968852  999767 start.go:300] post-start starting for "ingress-addon-legacy-518837" (driver="docker")
	I0201 09:18:21.968872  999767 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0201 09:18:21.968940  999767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0201 09:18:21.968997  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:21.985740  999767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34046 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa Username:docker}
	I0201 09:18:22.079188  999767 ssh_runner.go:195] Run: cat /etc/os-release
	I0201 09:18:22.082383  999767 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0201 09:18:22.082440  999767 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0201 09:18:22.082455  999767 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0201 09:18:22.082465  999767 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0201 09:18:22.082477  999767 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-952908/.minikube/addons for local assets ...
	I0201 09:18:22.082546  999767 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-952908/.minikube/files for local assets ...
	I0201 09:18:22.082655  999767 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-952908/.minikube/files/etc/ssl/certs/9597402.pem -> 9597402.pem in /etc/ssl/certs
	I0201 09:18:22.082672  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/files/etc/ssl/certs/9597402.pem -> /etc/ssl/certs/9597402.pem
	I0201 09:18:22.082786  999767 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0201 09:18:22.090464  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/files/etc/ssl/certs/9597402.pem --> /etc/ssl/certs/9597402.pem (1708 bytes)
	I0201 09:18:22.112055  999767 start.go:303] post-start completed in 143.1815ms
	I0201 09:18:22.112421  999767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-518837
	I0201 09:18:22.129001  999767 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/config.json ...
	I0201 09:18:22.129246  999767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0201 09:18:22.129299  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:22.146603  999767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34046 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa Username:docker}
	I0201 09:18:22.239270  999767 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0201 09:18:22.243581  999767 start.go:128] duration metric: createHost completed in 12.810942059s
	I0201 09:18:22.243602  999767 start.go:83] releasing machines lock for "ingress-addon-legacy-518837", held for 12.811077822s
	I0201 09:18:22.243683  999767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-518837
	I0201 09:18:22.260164  999767 ssh_runner.go:195] Run: cat /version.json
	I0201 09:18:22.260191  999767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0201 09:18:22.260220  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:22.260272  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:22.277454  999767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34046 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa Username:docker}
	I0201 09:18:22.277589  999767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34046 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa Username:docker}
	I0201 09:18:22.370361  999767 ssh_runner.go:195] Run: systemctl --version
	I0201 09:18:22.456805  999767 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0201 09:18:22.593444  999767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0201 09:18:22.597828  999767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0201 09:18:22.615701  999767 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0201 09:18:22.615780  999767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0201 09:18:22.642171  999767 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0201 09:18:22.642197  999767 start.go:475] detecting cgroup driver to use...
	I0201 09:18:22.642230  999767 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0201 09:18:22.642272  999767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0201 09:18:22.656692  999767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0201 09:18:22.667466  999767 docker.go:217] disabling cri-docker service (if available) ...
	I0201 09:18:22.667523  999767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0201 09:18:22.680100  999767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0201 09:18:22.693379  999767 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0201 09:18:22.767788  999767 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0201 09:18:22.848685  999767 docker.go:233] disabling docker service ...
	I0201 09:18:22.848747  999767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0201 09:18:22.867796  999767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0201 09:18:22.878825  999767 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0201 09:18:22.954559  999767 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0201 09:18:23.030687  999767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0201 09:18:23.041258  999767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0201 09:18:23.055854  999767 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0201 09:18:23.055921  999767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0201 09:18:23.064640  999767 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0201 09:18:23.064706  999767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0201 09:18:23.073856  999767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0201 09:18:23.082459  999767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0201 09:18:23.091088  999767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0201 09:18:23.099250  999767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0201 09:18:23.106683  999767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0201 09:18:23.113844  999767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0201 09:18:23.184950  999767 ssh_runner.go:195] Run: sudo systemctl restart crio
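	The sed edits above should leave the pause image, cgroup manager, and conmon cgroup settings in place after the restart; a hedged check, not something this test runs:

	    # show the CRI-O settings rewritten by the sed commands above
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected (approximate):
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"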
	I0201 09:18:23.286373  999767 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0201 09:18:23.286461  999767 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0201 09:18:23.289820  999767 start.go:543] Will wait 60s for crictl version
	I0201 09:18:23.289868  999767 ssh_runner.go:195] Run: which crictl
	I0201 09:18:23.293057  999767 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0201 09:18:23.326103  999767 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0201 09:18:23.326200  999767 ssh_runner.go:195] Run: crio --version
	I0201 09:18:23.360938  999767 ssh_runner.go:195] Run: crio --version
	I0201 09:18:23.397917  999767 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0201 09:18:23.399311  999767 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-518837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0201 09:18:23.416261  999767 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0201 09:18:23.420019  999767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0201 09:18:23.430272  999767 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0201 09:18:23.430361  999767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0201 09:18:23.475631  999767 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0201 09:18:23.475697  999767 ssh_runner.go:195] Run: which lz4
	I0201 09:18:23.479158  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0201 09:18:23.479238  999767 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0201 09:18:23.482357  999767 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0201 09:18:23.482383  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0201 09:18:24.460264  999767 crio.go:444] Took 0.981037 seconds to copy over tarball
	I0201 09:18:24.460330  999767 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0201 09:18:26.821049  999767 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.360681259s)
	I0201 09:18:26.821099  999767 crio.go:451] Took 2.360805 seconds to extract the tarball
	I0201 09:18:26.821114  999767 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0201 09:18:26.897962  999767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0201 09:18:26.930308  999767 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0201 09:18:26.930335  999767 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0201 09:18:26.930442  999767 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0201 09:18:26.930466  999767 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0201 09:18:26.930467  999767 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0201 09:18:26.930493  999767 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0201 09:18:26.930506  999767 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0201 09:18:26.930442  999767 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0201 09:18:26.930470  999767 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0201 09:18:26.930446  999767 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0201 09:18:26.931750  999767 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0201 09:18:26.931756  999767 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0201 09:18:26.931749  999767 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0201 09:18:26.931758  999767 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0201 09:18:26.931817  999767 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0201 09:18:26.932010  999767 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0201 09:18:26.932013  999767 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0201 09:18:26.932101  999767 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0201 09:18:27.104031  999767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0201 09:18:27.122208  999767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0201 09:18:27.135689  999767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0201 09:18:27.141031  999767 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0201 09:18:27.141081  999767 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0201 09:18:27.141128  999767 ssh_runner.go:195] Run: which crictl
	I0201 09:18:27.142619  999767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0201 09:18:27.163736  999767 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0201 09:18:27.163788  999767 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0201 09:18:27.163840  999767 ssh_runner.go:195] Run: which crictl
	I0201 09:18:27.176375  999767 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0201 09:18:27.176419  999767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0201 09:18:27.176422  999767 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0201 09:18:27.176449  999767 ssh_runner.go:195] Run: which crictl
	I0201 09:18:27.180534  999767 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0201 09:18:27.180582  999767 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0201 09:18:27.180626  999767 ssh_runner.go:195] Run: which crictl
	I0201 09:18:27.180635  999767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0201 09:18:27.200827  999767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0201 09:18:27.217874  999767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0201 09:18:27.242141  999767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0201 09:18:27.246822  999767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0201 09:18:27.246910  999767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0201 09:18:27.255304  999767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0201 09:18:27.255391  999767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0201 09:18:27.344156  999767 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0201 09:18:27.344209  999767 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0201 09:18:27.344255  999767 ssh_runner.go:195] Run: which crictl
	I0201 09:18:27.345602  999767 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0201 09:18:27.345701  999767 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0201 09:18:27.345767  999767 ssh_runner.go:195] Run: which crictl
	I0201 09:18:27.355052  999767 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0201 09:18:27.355106  999767 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0201 09:18:27.355133  999767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0201 09:18:27.355146  999767 ssh_runner.go:195] Run: which crictl
	I0201 09:18:27.358232  999767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0201 09:18:27.358294  999767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0201 09:18:27.358333  999767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0201 09:18:27.358432  999767 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0201 09:18:27.449524  999767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0201 09:18:27.449644  999767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0201 09:18:27.449673  999767 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0201 09:18:27.751106  999767 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0201 09:18:27.888213  999767 cache_images.go:92] LoadImages completed in 957.859336ms
	W0201 09:18:27.888310  999767 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-952908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
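	Because the cached images could not be loaded, the v1.18.20 control-plane images are expected to be pulled during kubeadm bootstrap instead; a hedged way to check what the runtime ends up with, not taken from this log:

	    # list the Kubernetes images visible to CRI-O on the node
	    sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns'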
	I0201 09:18:27.888379  999767 ssh_runner.go:195] Run: crio config
	I0201 09:18:27.932554  999767 cni.go:84] Creating CNI manager for ""
	I0201 09:18:27.932576  999767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0201 09:18:27.932594  999767 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0201 09:18:27.932613  999767 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-518837 NodeName:ingress-addon-legacy-518837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0201 09:18:27.932759  999767 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-518837"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
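	The config above is later written to /var/tmp/minikube/kubeadm.yaml.new on the node; a hedged sketch of a manual dry run against it (not something this test performs):

	    # exercise the generated kubeadm config without changing the node
	    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run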
	
	I0201 09:18:27.932847  999767 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-518837 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-518837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0201 09:18:27.932909  999767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0201 09:18:27.941621  999767 binaries.go:44] Found k8s binaries, skipping transfer
	I0201 09:18:27.941691  999767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0201 09:18:27.950046  999767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0201 09:18:27.966551  999767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0201 09:18:27.983022  999767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
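	Once the drop-in and unit file above are in place, the kubelet is normally picked up via systemd; a minimal sketch, not taken from this log:

	    # reload unit definitions and restart the kubelet with the new drop-in
	    sudo systemctl daemon-reload
	    sudo systemctl restart kubelet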
	I0201 09:18:27.999416  999767 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0201 09:18:28.002785  999767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0201 09:18:28.012799  999767 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837 for IP: 192.168.49.2
	I0201 09:18:28.012836  999767 certs.go:190] acquiring lock for shared ca certs: {Name:mk23a064dbf71f5683ee734795fa9d1b12119a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:28.012990  999767 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.key
	I0201 09:18:28.013026  999767 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.key
	I0201 09:18:28.013069  999767 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.key
	I0201 09:18:28.013081  999767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt with IP's: []
	I0201 09:18:28.387571  999767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt ...
	I0201 09:18:28.387605  999767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: {Name:mkdad59dba8be0382429cc98209beec6dbbbf2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:28.387784  999767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.key ...
	I0201 09:18:28.387800  999767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.key: {Name:mk88b300c0b76cf4b02c33d644067473ff39cf2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:28.387872  999767 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.key.dd3b5fb2
	I0201 09:18:28.387887  999767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0201 09:18:28.522713  999767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.crt.dd3b5fb2 ...
	I0201 09:18:28.522748  999767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.crt.dd3b5fb2: {Name:mkab9832ba7e37a3833085a105416a6a9b80f9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:28.522921  999767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.key.dd3b5fb2 ...
	I0201 09:18:28.522937  999767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.key.dd3b5fb2: {Name:mk0c28a1e11f4e0b4ce306ee74812e4a475762cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:28.523003  999767 certs.go:337] copying /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.crt
	I0201 09:18:28.523063  999767 certs.go:341] copying /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.key
	I0201 09:18:28.523110  999767 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.key
	I0201 09:18:28.523127  999767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.crt with IP's: []
	I0201 09:18:28.637530  999767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.crt ...
	I0201 09:18:28.637572  999767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.crt: {Name:mk1d962cd09ad16c0aa0d900eb37c5690761cc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:28.637735  999767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.key ...
	I0201 09:18:28.637750  999767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.key: {Name:mk459b15156ad88bf4891b698b5c58569b8d769a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:28.637855  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0201 09:18:28.637877  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0201 09:18:28.637886  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0201 09:18:28.637896  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0201 09:18:28.637905  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0201 09:18:28.637916  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0201 09:18:28.637926  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0201 09:18:28.637940  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0201 09:18:28.637991  999767 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/959740.pem (1338 bytes)
	W0201 09:18:28.638029  999767 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/959740_empty.pem, impossibly tiny 0 bytes
	I0201 09:18:28.638049  999767 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca-key.pem (1679 bytes)
	I0201 09:18:28.638074  999767 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/ca.pem (1078 bytes)
	I0201 09:18:28.638126  999767 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/cert.pem (1123 bytes)
	I0201 09:18:28.638149  999767 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/home/jenkins/minikube-integration/18051-952908/.minikube/certs/key.pem (1675 bytes)
	I0201 09:18:28.638196  999767 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-952908/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-952908/.minikube/files/etc/ssl/certs/9597402.pem (1708 bytes)
	I0201 09:18:28.638222  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0201 09:18:28.638235  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/certs/959740.pem -> /usr/share/ca-certificates/959740.pem
	I0201 09:18:28.638244  999767 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-952908/.minikube/files/etc/ssl/certs/9597402.pem -> /usr/share/ca-certificates/9597402.pem
	I0201 09:18:28.638834  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0201 09:18:28.662936  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0201 09:18:28.686256  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0201 09:18:28.709080  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0201 09:18:28.731838  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0201 09:18:28.754967  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0201 09:18:28.777725  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0201 09:18:28.800392  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0201 09:18:28.825462  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0201 09:18:28.851457  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/certs/959740.pem --> /usr/share/ca-certificates/959740.pem (1338 bytes)
	I0201 09:18:28.875034  999767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-952908/.minikube/files/etc/ssl/certs/9597402.pem --> /usr/share/ca-certificates/9597402.pem (1708 bytes)
	I0201 09:18:28.898105  999767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0201 09:18:28.914661  999767 ssh_runner.go:195] Run: openssl version
	I0201 09:18:28.919943  999767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/959740.pem && ln -fs /usr/share/ca-certificates/959740.pem /etc/ssl/certs/959740.pem"
	I0201 09:18:28.929190  999767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/959740.pem
	I0201 09:18:28.932982  999767 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb  1 09:14 /usr/share/ca-certificates/959740.pem
	I0201 09:18:28.933042  999767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/959740.pem
	I0201 09:18:28.939782  999767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/959740.pem /etc/ssl/certs/51391683.0"
	I0201 09:18:28.948887  999767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9597402.pem && ln -fs /usr/share/ca-certificates/9597402.pem /etc/ssl/certs/9597402.pem"
	I0201 09:18:28.957823  999767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9597402.pem
	I0201 09:18:28.961186  999767 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb  1 09:14 /usr/share/ca-certificates/9597402.pem
	I0201 09:18:28.961260  999767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9597402.pem
	I0201 09:18:28.967794  999767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9597402.pem /etc/ssl/certs/3ec20f2e.0"
	I0201 09:18:28.977009  999767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0201 09:18:28.986445  999767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0201 09:18:28.990019  999767 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb  1 09:09 /usr/share/ca-certificates/minikubeCA.pem
	I0201 09:18:28.990080  999767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0201 09:18:28.996837  999767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0201 09:18:29.006108  999767 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0201 09:18:29.009435  999767 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0201 09:18:29.009490  999767 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-518837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-518837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0201 09:18:29.009583  999767 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0201 09:18:29.009641  999767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0201 09:18:29.044607  999767 cri.go:89] found id: ""
	I0201 09:18:29.044675  999767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0201 09:18:29.053041  999767 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0201 09:18:29.061321  999767 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0201 09:18:29.061403  999767 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0201 09:18:29.069518  999767 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0201 09:18:29.069589  999767 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0201 09:18:29.115697  999767 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0201 09:18:29.115781  999767 kubeadm.go:322] [preflight] Running pre-flight checks
	I0201 09:18:29.155568  999767 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0201 09:18:29.155630  999767 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-gcp
	I0201 09:18:29.155713  999767 kubeadm.go:322] OS: Linux
	I0201 09:18:29.155769  999767 kubeadm.go:322] CGROUPS_CPU: enabled
	I0201 09:18:29.155833  999767 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0201 09:18:29.155878  999767 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0201 09:18:29.155958  999767 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0201 09:18:29.156019  999767 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0201 09:18:29.156060  999767 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0201 09:18:29.225133  999767 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0201 09:18:29.225280  999767 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0201 09:18:29.225396  999767 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0201 09:18:29.417491  999767 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0201 09:18:29.418598  999767 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0201 09:18:29.418670  999767 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0201 09:18:29.498741  999767 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0201 09:18:29.502646  999767 out.go:204]   - Generating certificates and keys ...
	I0201 09:18:29.502728  999767 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0201 09:18:29.502807  999767 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0201 09:18:29.575385  999767 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0201 09:18:29.816610  999767 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0201 09:18:29.925956  999767 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0201 09:18:29.995737  999767 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0201 09:18:30.153802  999767 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0201 09:18:30.153986  999767 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-518837 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0201 09:18:30.263097  999767 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0201 09:18:30.263255  999767 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-518837 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0201 09:18:30.398789  999767 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0201 09:18:30.636737  999767 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0201 09:18:30.849112  999767 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0201 09:18:30.849201  999767 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0201 09:18:30.924067  999767 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0201 09:18:30.976789  999767 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0201 09:18:31.069408  999767 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0201 09:18:31.208634  999767 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0201 09:18:31.209296  999767 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0201 09:18:31.211450  999767 out.go:204]   - Booting up control plane ...
	I0201 09:18:31.211587  999767 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0201 09:18:31.216228  999767 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0201 09:18:31.217208  999767 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0201 09:18:31.217910  999767 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0201 09:18:31.219728  999767 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0201 09:18:38.222295  999767 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002527 seconds
	I0201 09:18:38.222447  999767 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0201 09:18:38.233399  999767 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0201 09:18:38.748531  999767 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0201 09:18:38.748755  999767 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-518837 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0201 09:18:39.256084  999767 kubeadm.go:322] [bootstrap-token] Using token: 35ax33.ybewxqsqejm3oki3
	I0201 09:18:39.257666  999767 out.go:204]   - Configuring RBAC rules ...
	I0201 09:18:39.257778  999767 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0201 09:18:39.262126  999767 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0201 09:18:39.268300  999767 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0201 09:18:39.270329  999767 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0201 09:18:39.272388  999767 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0201 09:18:39.274251  999767 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0201 09:18:39.281108  999767 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0201 09:18:39.554461  999767 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0201 09:18:39.669713  999767 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0201 09:18:39.670831  999767 kubeadm.go:322] 
	I0201 09:18:39.670919  999767 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0201 09:18:39.670935  999767 kubeadm.go:322] 
	I0201 09:18:39.671017  999767 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0201 09:18:39.671025  999767 kubeadm.go:322] 
	I0201 09:18:39.671072  999767 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0201 09:18:39.671182  999767 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0201 09:18:39.671279  999767 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0201 09:18:39.671297  999767 kubeadm.go:322] 
	I0201 09:18:39.671373  999767 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0201 09:18:39.671476  999767 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0201 09:18:39.671570  999767 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0201 09:18:39.671579  999767 kubeadm.go:322] 
	I0201 09:18:39.671676  999767 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0201 09:18:39.671752  999767 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0201 09:18:39.671759  999767 kubeadm.go:322] 
	I0201 09:18:39.671889  999767 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 35ax33.ybewxqsqejm3oki3 \
	I0201 09:18:39.672097  999767 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7910553c67bf33c7893af1499c33a494f0bc07d5d4917285901e8697cae63a23 \
	I0201 09:18:39.672186  999767 kubeadm.go:322]     --control-plane 
	I0201 09:18:39.672200  999767 kubeadm.go:322] 
	I0201 09:18:39.672300  999767 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0201 09:18:39.672315  999767 kubeadm.go:322] 
	I0201 09:18:39.672416  999767 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 35ax33.ybewxqsqejm3oki3 \
	I0201 09:18:39.672545  999767 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7910553c67bf33c7893af1499c33a494f0bc07d5d4917285901e8697cae63a23 
	I0201 09:18:39.674301  999767 kubeadm.go:322] W0201 09:18:29.115236    1384 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0201 09:18:39.674559  999767 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-gcp\n", err: exit status 1
	I0201 09:18:39.674729  999767 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0201 09:18:39.674880  999767 kubeadm.go:322] W0201 09:18:31.215880    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0201 09:18:39.675003  999767 kubeadm.go:322] W0201 09:18:31.216980    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0201 09:18:39.675032  999767 cni.go:84] Creating CNI manager for ""
	I0201 09:18:39.675046  999767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0201 09:18:39.676876  999767 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0201 09:18:39.678252  999767 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0201 09:18:39.682265  999767 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0201 09:18:39.682286  999767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0201 09:18:39.699775  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0201 09:18:40.142665  999767 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0201 09:18:40.142770  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:40.142772  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=ingress-addon-legacy-518837 minikube.k8s.io/updated_at=2024_02_01T09_18_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:40.250826  999767 ops.go:34] apiserver oom_adj: -16
	I0201 09:18:40.250856  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:40.751543  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:41.251629  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:41.751128  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:42.251498  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:42.751630  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:43.250911  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:43.751589  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:44.251676  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:44.750967  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:45.251617  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:45.751554  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:46.250946  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:46.751656  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:47.251019  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:47.751794  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:48.250906  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:48.751639  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:49.251768  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:49.751433  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:50.251204  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:50.751641  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:51.251533  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:51.751195  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:52.251617  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:52.751595  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:53.251098  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:53.751644  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:54.251640  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:54.751915  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:55.251641  999767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0201 09:18:55.319575  999767 kubeadm.go:1088] duration metric: took 15.176874446s to wait for elevateKubeSystemPrivileges.
	I0201 09:18:55.319614  999767 kubeadm.go:406] StartCluster complete in 26.310128084s
	I0201 09:18:55.319637  999767 settings.go:142] acquiring lock: {Name:mk0819893db79284ba714854fba438996c690ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:55.319728  999767 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:18:55.320535  999767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/kubeconfig: {Name:mk4dec6d7936952ed996b642fbbfa2a496c41523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:18:55.320817  999767 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0201 09:18:55.320978  999767 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0201 09:18:55.321076  999767 config.go:182] Loaded profile config "ingress-addon-legacy-518837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0201 09:18:55.321083  999767 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-518837"
	I0201 09:18:55.321101  999767 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-518837"
	I0201 09:18:55.321123  999767 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-518837"
	I0201 09:18:55.321135  999767 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-518837"
	I0201 09:18:55.321197  999767 host.go:66] Checking if "ingress-addon-legacy-518837" exists ...
	I0201 09:18:55.321634  999767 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-518837 --format={{.State.Status}}
	I0201 09:18:55.321614  999767 kapi.go:59] client config for ingress-addon-legacy-518837: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.key", CAFile:"/home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0201 09:18:55.321794  999767 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-518837 --format={{.State.Status}}
	I0201 09:18:55.322432  999767 cert_rotation.go:137] Starting client certificate rotation controller
	I0201 09:18:55.344502  999767 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0201 09:18:55.346634  999767 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0201 09:18:55.346686  999767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0201 09:18:55.346772  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:55.349176  999767 kapi.go:59] client config for ingress-addon-legacy-518837: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.key", CAFile:"/home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0201 09:18:55.349536  999767 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-518837"
	I0201 09:18:55.349582  999767 host.go:66] Checking if "ingress-addon-legacy-518837" exists ...
	I0201 09:18:55.350005  999767 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-518837 --format={{.State.Status}}
	I0201 09:18:55.368685  999767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34046 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa Username:docker}
	I0201 09:18:55.372921  999767 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0201 09:18:55.372946  999767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0201 09:18:55.373010  999767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-518837
	I0201 09:18:55.399349  999767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34046 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/ingress-addon-legacy-518837/id_rsa Username:docker}
	I0201 09:18:55.444223  999767 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0201 09:18:55.553734  999767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0201 09:18:55.555500  999767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0201 09:18:55.832380  999767 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-518837" context rescaled to 1 replicas
	I0201 09:18:55.832440  999767 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0201 09:18:55.835108  999767 out.go:177] * Verifying Kubernetes components...
	I0201 09:18:55.836504  999767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0201 09:18:55.847559  999767 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0201 09:18:56.136657  999767 kapi.go:59] client config for ingress-addon-legacy-518837: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.key", CAFile:"/home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0201 09:18:56.137063  999767 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-518837" to be "Ready" ...
	I0201 09:18:56.143538  999767 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0201 09:18:56.144547  999767 addons.go:505] enable addons completed in 823.583004ms: enabled=[storage-provisioner default-storageclass]
	I0201 09:18:58.140234  999767 node_ready.go:58] node "ingress-addon-legacy-518837" has status "Ready":"False"
	I0201 09:19:00.140916  999767 node_ready.go:58] node "ingress-addon-legacy-518837" has status "Ready":"False"
	I0201 09:19:02.640379  999767 node_ready.go:58] node "ingress-addon-legacy-518837" has status "Ready":"False"
	I0201 09:19:04.640711  999767 node_ready.go:58] node "ingress-addon-legacy-518837" has status "Ready":"False"
	I0201 09:19:07.140508  999767 node_ready.go:58] node "ingress-addon-legacy-518837" has status "Ready":"False"
	I0201 09:19:09.140995  999767 node_ready.go:58] node "ingress-addon-legacy-518837" has status "Ready":"False"
	I0201 09:19:10.140662  999767 node_ready.go:49] node "ingress-addon-legacy-518837" has status "Ready":"True"
	I0201 09:19:10.140699  999767 node_ready.go:38] duration metric: took 14.003584727s waiting for node "ingress-addon-legacy-518837" to be "Ready" ...
	I0201 09:19:10.140710  999767 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0201 09:19:10.147505  999767 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-67vrw" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:12.151025  999767 pod_ready.go:102] pod "coredns-66bff467f8-67vrw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-01 09:18:54 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0201 09:19:14.153941  999767 pod_ready.go:102] pod "coredns-66bff467f8-67vrw" in "kube-system" namespace has status "Ready":"False"
	I0201 09:19:16.653724  999767 pod_ready.go:102] pod "coredns-66bff467f8-67vrw" in "kube-system" namespace has status "Ready":"False"
	I0201 09:19:19.154449  999767 pod_ready.go:102] pod "coredns-66bff467f8-67vrw" in "kube-system" namespace has status "Ready":"False"
	I0201 09:19:20.154033  999767 pod_ready.go:92] pod "coredns-66bff467f8-67vrw" in "kube-system" namespace has status "Ready":"True"
	I0201 09:19:20.154059  999767 pod_ready.go:81] duration metric: took 10.006522967s waiting for pod "coredns-66bff467f8-67vrw" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.154069  999767 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-518837" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.158358  999767 pod_ready.go:92] pod "etcd-ingress-addon-legacy-518837" in "kube-system" namespace has status "Ready":"True"
	I0201 09:19:20.158378  999767 pod_ready.go:81] duration metric: took 4.303317ms waiting for pod "etcd-ingress-addon-legacy-518837" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.158388  999767 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-518837" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.162435  999767 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-518837" in "kube-system" namespace has status "Ready":"True"
	I0201 09:19:20.162459  999767 pod_ready.go:81] duration metric: took 4.063212ms waiting for pod "kube-apiserver-ingress-addon-legacy-518837" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.162471  999767 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-518837" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.166323  999767 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-518837" in "kube-system" namespace has status "Ready":"True"
	I0201 09:19:20.166343  999767 pod_ready.go:81] duration metric: took 3.864346ms waiting for pod "kube-controller-manager-ingress-addon-legacy-518837" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.166351  999767 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wmff" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.170330  999767 pod_ready.go:92] pod "kube-proxy-2wmff" in "kube-system" namespace has status "Ready":"True"
	I0201 09:19:20.170348  999767 pod_ready.go:81] duration metric: took 3.992002ms waiting for pod "kube-proxy-2wmff" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.170356  999767 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-518837" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.349808  999767 request.go:629] Waited for 179.360487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-518837
	I0201 09:19:20.549205  999767 request.go:629] Waited for 196.402573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-518837
	I0201 09:19:20.552166  999767 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-518837" in "kube-system" namespace has status "Ready":"True"
	I0201 09:19:20.552189  999767 pod_ready.go:81] duration metric: took 381.826833ms waiting for pod "kube-scheduler-ingress-addon-legacy-518837" in "kube-system" namespace to be "Ready" ...
	I0201 09:19:20.552202  999767 pod_ready.go:38] duration metric: took 10.411480225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0201 09:19:20.552218  999767 api_server.go:52] waiting for apiserver process to appear ...
	I0201 09:19:20.552275  999767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0201 09:19:20.563384  999767 api_server.go:72] duration metric: took 24.730891284s to wait for apiserver process to appear ...
	I0201 09:19:20.563411  999767 api_server.go:88] waiting for apiserver healthz status ...
	I0201 09:19:20.563432  999767 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0201 09:19:20.568239  999767 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0201 09:19:20.569144  999767 api_server.go:141] control plane version: v1.18.20
	I0201 09:19:20.569170  999767 api_server.go:131] duration metric: took 5.752126ms to wait for apiserver health ...
	I0201 09:19:20.569178  999767 system_pods.go:43] waiting for kube-system pods to appear ...
	I0201 09:19:20.749609  999767 request.go:629] Waited for 180.347357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0201 09:19:20.755443  999767 system_pods.go:59] 8 kube-system pods found
	I0201 09:19:20.755485  999767 system_pods.go:61] "coredns-66bff467f8-67vrw" [5b81257c-34c5-47f2-82ff-b81fdbbd881e] Running
	I0201 09:19:20.755494  999767 system_pods.go:61] "etcd-ingress-addon-legacy-518837" [c8353404-f9ef-46c0-9541-6d33eabc549c] Running
	I0201 09:19:20.755503  999767 system_pods.go:61] "kindnet-mhpcb" [d2da7411-ec1f-47c6-9546-5bd5ff089850] Running
	I0201 09:19:20.755507  999767 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-518837" [17fb4bea-042c-4a52-bd80-1a7b9afe1ad7] Running
	I0201 09:19:20.755512  999767 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-518837" [ba92f904-9f5e-4ec9-9f09-6e0531fefadf] Running
	I0201 09:19:20.755516  999767 system_pods.go:61] "kube-proxy-2wmff" [45f7c4b2-51d2-4724-8e5f-525c5c45ad31] Running
	I0201 09:19:20.755520  999767 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-518837" [78a334fb-c9dc-49ce-aa46-552bd6534a2a] Running
	I0201 09:19:20.755527  999767 system_pods.go:61] "storage-provisioner" [9772175a-6afb-49b6-b76d-76aef61539da] Running
	I0201 09:19:20.755533  999767 system_pods.go:74] duration metric: took 186.349614ms to wait for pod list to return data ...
	I0201 09:19:20.755541  999767 default_sa.go:34] waiting for default service account to be created ...
	I0201 09:19:20.948929  999767 request.go:629] Waited for 193.291474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0201 09:19:20.951481  999767 default_sa.go:45] found service account: "default"
	I0201 09:19:20.951506  999767 default_sa.go:55] duration metric: took 195.9562ms for default service account to be created ...
	I0201 09:19:20.951516  999767 system_pods.go:116] waiting for k8s-apps to be running ...
	I0201 09:19:21.148891  999767 request.go:629] Waited for 197.299222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0201 09:19:21.154285  999767 system_pods.go:86] 8 kube-system pods found
	I0201 09:19:21.154317  999767 system_pods.go:89] "coredns-66bff467f8-67vrw" [5b81257c-34c5-47f2-82ff-b81fdbbd881e] Running
	I0201 09:19:21.154323  999767 system_pods.go:89] "etcd-ingress-addon-legacy-518837" [c8353404-f9ef-46c0-9541-6d33eabc549c] Running
	I0201 09:19:21.154328  999767 system_pods.go:89] "kindnet-mhpcb" [d2da7411-ec1f-47c6-9546-5bd5ff089850] Running
	I0201 09:19:21.154332  999767 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-518837" [17fb4bea-042c-4a52-bd80-1a7b9afe1ad7] Running
	I0201 09:19:21.154336  999767 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-518837" [ba92f904-9f5e-4ec9-9f09-6e0531fefadf] Running
	I0201 09:19:21.154340  999767 system_pods.go:89] "kube-proxy-2wmff" [45f7c4b2-51d2-4724-8e5f-525c5c45ad31] Running
	I0201 09:19:21.154343  999767 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-518837" [78a334fb-c9dc-49ce-aa46-552bd6534a2a] Running
	I0201 09:19:21.154347  999767 system_pods.go:89] "storage-provisioner" [9772175a-6afb-49b6-b76d-76aef61539da] Running
	I0201 09:19:21.154354  999767 system_pods.go:126] duration metric: took 202.832539ms to wait for k8s-apps to be running ...
	I0201 09:19:21.154363  999767 system_svc.go:44] waiting for kubelet service to be running ....
	I0201 09:19:21.154447  999767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0201 09:19:21.167426  999767 system_svc.go:56] duration metric: took 13.050327ms WaitForService to wait for kubelet.
	I0201 09:19:21.167467  999767 kubeadm.go:581] duration metric: took 25.334978559s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0201 09:19:21.167496  999767 node_conditions.go:102] verifying NodePressure condition ...
	I0201 09:19:21.348894  999767 request.go:629] Waited for 181.293376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0201 09:19:21.352171  999767 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0201 09:19:21.352202  999767 node_conditions.go:123] node cpu capacity is 8
	I0201 09:19:21.352213  999767 node_conditions.go:105] duration metric: took 184.707957ms to run NodePressure ...
	I0201 09:19:21.352224  999767 start.go:228] waiting for startup goroutines ...
	I0201 09:19:21.352230  999767 start.go:233] waiting for cluster config update ...
	I0201 09:19:21.352240  999767 start.go:242] writing updated cluster config ...
	I0201 09:19:21.352501  999767 ssh_runner.go:195] Run: rm -f paused
	I0201 09:19:21.401990  999767 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0201 09:19:21.404199  999767 out.go:177] 
	W0201 09:19:21.405802  999767 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0201 09:19:21.407252  999767 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0201 09:19:21.408794  999767 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-518837" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 01 09:22:16 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:16.472968440Z" level=info msg="Starting container: 72099fa788d0b8d07ddb78eee08baf9670405de50ac5be2a493519bd68e3eddd" id=0ce729e3-9b95-4901-985e-feccf98f5c3e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Feb 01 09:22:16 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:16.480824910Z" level=info msg="Started container" PID=4860 containerID=72099fa788d0b8d07ddb78eee08baf9670405de50ac5be2a493519bd68e3eddd description=default/hello-world-app-5f5d8b66bb-w9chz/hello-world-app id=0ce729e3-9b95-4901-985e-feccf98f5c3e name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=68c5013263696db799c96d56341da814c62278a3bd0312ef0da33bd7d28f27fb
	Feb 01 09:22:29 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:29.935591693Z" level=info msg="Stopping pod sandbox: d41c3781af7d4a8d4f86340c25a18c6af3b782543cc8e1ab1e4195aa2293e270" id=1a05a9cb-4b96-4e04-bec5-e1f6eeadb972 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 01 09:22:29 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:29.937030131Z" level=info msg="Stopped pod sandbox: d41c3781af7d4a8d4f86340c25a18c6af3b782543cc8e1ab1e4195aa2293e270" id=1a05a9cb-4b96-4e04-bec5-e1f6eeadb972 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 01 09:22:29 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:29.944637718Z" level=info msg="Stopping pod sandbox: d41c3781af7d4a8d4f86340c25a18c6af3b782543cc8e1ab1e4195aa2293e270" id=3f3de2a9-dd06-48ab-a71a-53af7fb52f6e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 01 09:22:29 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:29.944729754Z" level=info msg="Stopped pod sandbox (already stopped): d41c3781af7d4a8d4f86340c25a18c6af3b782543cc8e1ab1e4195aa2293e270" id=3f3de2a9-dd06-48ab-a71a-53af7fb52f6e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 01 09:22:30 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:30.730558882Z" level=info msg="Stopping container: f76b7edac0475d0f6099be94c1c17a161f99fe6065b171ef68cdc50ee10a9b19 (timeout: 2s)" id=44fa4e09-f259-45d3-9f74-3262adb8f640 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 01 09:22:30 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:30.732552917Z" level=info msg="Stopping container: f76b7edac0475d0f6099be94c1c17a161f99fe6065b171ef68cdc50ee10a9b19 (timeout: 2s)" id=08b002d6-8d61-408f-97c8-480733444dfa name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 01 09:22:31 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:31.933714789Z" level=info msg="Stopping pod sandbox: d41c3781af7d4a8d4f86340c25a18c6af3b782543cc8e1ab1e4195aa2293e270" id=a7a1d154-4fcb-4600-9161-6ba06ed47f6c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 01 09:22:31 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:31.933785591Z" level=info msg="Stopped pod sandbox (already stopped): d41c3781af7d4a8d4f86340c25a18c6af3b782543cc8e1ab1e4195aa2293e270" id=a7a1d154-4fcb-4600-9161-6ba06ed47f6c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.738787940Z" level=warning msg="Stopping container f76b7edac0475d0f6099be94c1c17a161f99fe6065b171ef68cdc50ee10a9b19 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=44fa4e09-f259-45d3-9f74-3262adb8f640 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 01 09:22:32 ingress-addon-legacy-518837 conmon[3396]: conmon f76b7edac0475d0f6099 <ninfo>: container 3408 exited with status 137
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.890927541Z" level=info msg="Stopped container f76b7edac0475d0f6099be94c1c17a161f99fe6065b171ef68cdc50ee10a9b19: ingress-nginx/ingress-nginx-controller-7fcf777cb7-wvwxc/controller" id=08b002d6-8d61-408f-97c8-480733444dfa name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.891461738Z" level=info msg="Stopped container f76b7edac0475d0f6099be94c1c17a161f99fe6065b171ef68cdc50ee10a9b19: ingress-nginx/ingress-nginx-controller-7fcf777cb7-wvwxc/controller" id=44fa4e09-f259-45d3-9f74-3262adb8f640 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.891697394Z" level=info msg="Stopping pod sandbox: ad524e67ddd44af8486fcaffe3231b2c4b5fab8911b9f8748c2171f42434f170" id=61d8eb3d-5b41-4c7c-a10d-ea4376045676 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.891797667Z" level=info msg="Stopping pod sandbox: ad524e67ddd44af8486fcaffe3231b2c4b5fab8911b9f8748c2171f42434f170" id=bd114a85-7a8e-44df-911b-3d1eb242b973 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.894850462Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-KXQ5HDXEDQIIMFPX - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-43HSIROIQAIOT6R5 - [0:0]\n-X KUBE-HP-43HSIROIQAIOT6R5\n-X KUBE-HP-KXQ5HDXEDQIIMFPX\nCOMMIT\n"
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.896251937Z" level=info msg="Closing host port tcp:80"
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.896299775Z" level=info msg="Closing host port tcp:443"
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.897305609Z" level=info msg="Host port tcp:80 does not have an open socket"
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.897331250Z" level=info msg="Host port tcp:443 does not have an open socket"
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.897470330Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-wvwxc Namespace:ingress-nginx ID:ad524e67ddd44af8486fcaffe3231b2c4b5fab8911b9f8748c2171f42434f170 UID:cef1da88-e5af-4397-9a08-c8791b506d18 NetNS:/var/run/netns/02a7a020-4c8e-45e6-8d41-73b72d6140fb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.897592858Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-wvwxc from CNI network \"kindnet\" (type=ptp)"
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.936068049Z" level=info msg="Stopped pod sandbox: ad524e67ddd44af8486fcaffe3231b2c4b5fab8911b9f8748c2171f42434f170" id=61d8eb3d-5b41-4c7c-a10d-ea4376045676 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 01 09:22:32 ingress-addon-legacy-518837 crio[959]: time="2024-02-01 09:22:32.936227573Z" level=info msg="Stopped pod sandbox (already stopped): ad524e67ddd44af8486fcaffe3231b2c4b5fab8911b9f8748c2171f42434f170" id=bd114a85-7a8e-44df-911b-3d1eb242b973 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	72099fa788d0b       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            22 seconds ago      Running             hello-world-app           0                   68c5013263696       hello-world-app-5f5d8b66bb-w9chz
	25cff0861809c       docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da                    2 minutes ago       Running             nginx                     0                   f1899a166b8e7       nginx
	f76b7edac0475       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   ad524e67ddd44       ingress-nginx-controller-7fcf777cb7-wvwxc
	9ce5ebc3311c9       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   e0baeb563cb45       ingress-nginx-admission-patch-vpsz6
	2a8e6620f2e5b       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   0b81fcd64287b       ingress-nginx-admission-create-8jf66
	61d4c08eea15a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   df01ce5f386c4       storage-provisioner
	3aab32adf8ee7       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   38276f02b7a58       coredns-66bff467f8-67vrw
	18f98c63faa83       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   a16a74fedeccf       kindnet-mhpcb
	2d9afd8e136f4       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   e04ccce781320       kube-proxy-2wmff
	946aa0f12828f       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   8242671a5f17e       kube-scheduler-ingress-addon-legacy-518837
	de7eab1618d9f       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   0b0cc4920b1b0       kube-controller-manager-ingress-addon-legacy-518837
	6a9d7da5cc006       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   2ec678166a560       kube-apiserver-ingress-addon-legacy-518837
	3fbc9402e7ee2       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   0e07230872988       etcd-ingress-addon-legacy-518837
	
	
	==> coredns [3aab32adf8ee77266c1d22e7f5ed96ddd82386ca74833b5c2d0b1d81503e43f4] <==
	[INFO] 10.244.0.5:51782 - 65317 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005187262s
	[INFO] 10.244.0.5:60713 - 2709 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004427169s
	[INFO] 10.244.0.5:57660 - 62444 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004558126s
	[INFO] 10.244.0.5:54096 - 10845 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004208368s
	[INFO] 10.244.0.5:37044 - 24717 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004248583s
	[INFO] 10.244.0.5:51782 - 45142 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004286461s
	[INFO] 10.244.0.5:46633 - 7927 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004811127s
	[INFO] 10.244.0.5:56617 - 5115 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004274831s
	[INFO] 10.244.0.5:42160 - 11919 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004482905s
	[INFO] 10.244.0.5:57660 - 62314 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005277158s
	[INFO] 10.244.0.5:60713 - 23055 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005299805s
	[INFO] 10.244.0.5:46633 - 5535 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00517413s
	[INFO] 10.244.0.5:37044 - 40845 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005189943s
	[INFO] 10.244.0.5:57660 - 3138 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072893s
	[INFO] 10.244.0.5:60713 - 33202 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055911s
	[INFO] 10.244.0.5:56617 - 26556 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005297776s
	[INFO] 10.244.0.5:54096 - 3888 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005467931s
	[INFO] 10.244.0.5:42160 - 32821 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005176291s
	[INFO] 10.244.0.5:46633 - 46516 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102321s
	[INFO] 10.244.0.5:37044 - 26328 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000142182s
	[INFO] 10.244.0.5:56617 - 8517 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056468s
	[INFO] 10.244.0.5:51782 - 50558 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005640935s
	[INFO] 10.244.0.5:51782 - 48686 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060297s
	[INFO] 10.244.0.5:54096 - 31948 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055318s
	[INFO] 10.244.0.5:42160 - 36979 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00009306s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-518837
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-518837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=ingress-addon-legacy-518837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_01T09_18_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 01 Feb 2024 09:18:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-518837
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 01 Feb 2024 09:22:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 01 Feb 2024 09:20:10 +0000   Thu, 01 Feb 2024 09:18:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 01 Feb 2024 09:20:10 +0000   Thu, 01 Feb 2024 09:18:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 01 Feb 2024 09:20:10 +0000   Thu, 01 Feb 2024 09:18:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 01 Feb 2024 09:20:10 +0000   Thu, 01 Feb 2024 09:19:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-518837
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d334302b51348fb976e4e322f5df212
	  System UUID:                6ba756b1-ceb9-426f-9f96-d697ff515647
	  Boot ID:                    2cfa37ec-936f-4f6f-8415-4c1cf32697e8
	  Kernel Version:             5.15.0-1049-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-w9chz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 coredns-66bff467f8-67vrw                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-518837                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kindnet-mhpcb                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m43s
	  kube-system                 kube-apiserver-ingress-addon-legacy-518837             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-518837    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-2wmff                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-scheduler-ingress-addon-legacy-518837             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m7s (x4 over 4m7s)  kubelet     Node ingress-addon-legacy-518837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x4 over 4m7s)  kubelet     Node ingress-addon-legacy-518837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x4 over 4m7s)  kubelet     Node ingress-addon-legacy-518837 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m59s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s                kubelet     Node ingress-addon-legacy-518837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s                kubelet     Node ingress-addon-legacy-518837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s                kubelet     Node ingress-addon-legacy-518837 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m42s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m29s                kubelet     Node ingress-addon-legacy-518837 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.007351] FS-Cache: O-key=[8] 'f4b90f0200000000'
	[  +0.004921] FS-Cache: N-cookie c=000000f9 [p=000000ed fl=2 nc=0 na=1]
	[  +0.006578] FS-Cache: N-cookie d=000000006144a7c0{9p.inode} n=000000009c94d2da
	[  +0.008751] FS-Cache: N-key=[8] 'f4b90f0200000000'
	[  +0.285627] FS-Cache: Duplicate cookie detected
	[  +0.005043] FS-Cache: O-cookie c=000000f3 [p=000000ed fl=226 nc=0 na=1]
	[  +0.007440] FS-Cache: O-cookie d=000000006144a7c0{9p.inode} n=0000000086622ce2
	[  +0.007712] FS-Cache: O-key=[8] 'ffb90f0200000000'
	[  +0.005047] FS-Cache: N-cookie c=000000fa [p=000000ed fl=2 nc=0 na=1]
	[  +0.006577] FS-Cache: N-cookie d=000000006144a7c0{9p.inode} n=00000000c1cbdf5a
	[  +0.008765] FS-Cache: N-key=[8] 'ffb90f0200000000'
	[Feb 1 09:19] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 8a dd 68 a9 7d 9a 96 0b 0d 1e 81 08 00
	[  +1.010975] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 8a dd 68 a9 7d 9a 96 0b 0d 1e 81 08 00
	[Feb 1 09:20] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000033] ll header: 00000000: 6e 8a dd 68 a9 7d 9a 96 0b 0d 1e 81 08 00
	[  +4.159661] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 8a dd 68 a9 7d 9a 96 0b 0d 1e 81 08 00
	[  +8.195398] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 8a dd 68 a9 7d 9a 96 0b 0d 1e 81 08 00
	[ +16.126795] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 8a dd 68 a9 7d 9a 96 0b 0d 1e 81 08 00
	[Feb 1 09:21] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000041] ll header: 00000000: 6e 8a dd 68 a9 7d 9a 96 0b 0d 1e 81 08 00
	
	
	==> etcd [3fbc9402e7ee29175d9f47a811410a60d2978ed7e21c9dffd8d785d20cf4f333] <==
	raft2024/02/01 09:18:32 INFO: aec36adc501070cc became follower at term 0
	raft2024/02/01 09:18:32 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/02/01 09:18:32 INFO: aec36adc501070cc became follower at term 1
	raft2024/02/01 09:18:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-01 09:18:32.649548 W | auth: simple token is not cryptographically signed
	2024-02-01 09:18:32.652489 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-02-01 09:18:32.653185 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/02/01 09:18:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-01 09:18:32.653502 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-02-01 09:18:32.654975 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-01 09:18:32.655114 I | embed: listening for peers on 192.168.49.2:2380
	2024-02-01 09:18:32.655245 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/02/01 09:18:32 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/02/01 09:18:32 INFO: aec36adc501070cc became candidate at term 2
	raft2024/02/01 09:18:32 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/02/01 09:18:32 INFO: aec36adc501070cc became leader at term 2
	raft2024/02/01 09:18:32 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-02-01 09:18:32.747304 I | etcdserver: setting up the initial cluster version to 3.4
	2024-02-01 09:18:32.748045 I | etcdserver: published {Name:ingress-addon-legacy-518837 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-02-01 09:18:32.748130 I | embed: ready to serve client requests
	2024-02-01 09:18:32.748201 I | embed: ready to serve client requests
	2024-02-01 09:18:32.749529 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-02-01 09:18:32.749617 I | embed: serving client requests on 192.168.49.2:2379
	2024-02-01 09:18:32.749680 I | etcdserver/api: enabled capabilities for version 3.4
	2024-02-01 09:18:32.750185 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 09:22:38 up 16:05,  0 users,  load average: 0.27, 0.68, 1.14
	Linux ingress-addon-legacy-518837 5.15.0-1049-gcp #57~20.04.1-Ubuntu SMP Wed Jan 17 16:04:23 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [18f98c63faa8343b821457dd5baf47cc387eecbfe63925edbf0f0642c4f040ff] <==
	I0201 09:20:32.477611       1 main.go:227] handling current node
	I0201 09:20:42.491330       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:20:42.491365       1 main.go:227] handling current node
	I0201 09:20:52.494842       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:20:52.494868       1 main.go:227] handling current node
	I0201 09:21:02.498949       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:21:02.498976       1 main.go:227] handling current node
	I0201 09:21:12.511315       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:21:12.511339       1 main.go:227] handling current node
	I0201 09:21:22.523724       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:21:22.523750       1 main.go:227] handling current node
	I0201 09:21:32.536397       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:21:32.536423       1 main.go:227] handling current node
	I0201 09:21:42.548771       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:21:42.548801       1 main.go:227] handling current node
	I0201 09:21:52.559266       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:21:52.559299       1 main.go:227] handling current node
	I0201 09:22:02.563150       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:22:02.563175       1 main.go:227] handling current node
	I0201 09:22:12.575307       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:22:12.575330       1 main.go:227] handling current node
	I0201 09:22:22.578905       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:22:22.578929       1 main.go:227] handling current node
	I0201 09:22:32.589608       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0201 09:22:32.589634       1 main.go:227] handling current node
	
	
	==> kube-apiserver [6a9d7da5cc00655eaaf1ca302af6d69966f600219002513ad81a3544eec4cce6] <==
	E0201 09:18:36.472135       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0201 09:18:36.569142       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0201 09:18:36.569734       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0201 09:18:36.569856       1 cache.go:39] Caches are synced for autoregister controller
	I0201 09:18:36.630599       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0201 09:18:36.630599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0201 09:18:37.468321       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0201 09:18:37.468349       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0201 09:18:37.472955       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0201 09:18:37.476092       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0201 09:18:37.476110       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0201 09:18:37.781058       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0201 09:18:37.815218       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0201 09:18:37.962434       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0201 09:18:37.963686       1 controller.go:609] quota admission added evaluator for: endpoints
	I0201 09:18:37.966912       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0201 09:18:38.798094       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0201 09:18:39.543779       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0201 09:18:39.661108       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0201 09:18:39.853154       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0201 09:18:54.707916       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0201 09:18:55.195631       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0201 09:18:55.195631       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0201 09:19:22.233195       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0201 09:19:49.686127       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [de7eab1618d9fdc4a597868ccba57dd6bc572acabdd8849b39b951f153b08a19] <==
	I0201 09:18:55.195283       1 shared_informer.go:230] Caches are synced for resource quota 
	I0201 09:18:55.201113       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0d218f4b-2adc-4c6f-9933-c2ad2d4c40e1", APIVersion:"apps/v1", ResourceVersion:"226", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-mhpcb
	I0201 09:18:55.202752       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"9aed4b23-2665-4dbb-bb16-e85e27c27f6f", APIVersion:"apps/v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-2wmff
	I0201 09:18:55.210083       1 shared_informer.go:230] Caches are synced for taint 
	I0201 09:18:55.210213       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0201 09:18:55.210240       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0201 09:18:55.210525       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-518837. Assuming now as a timestamp.
	I0201 09:18:55.210669       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0201 09:18:55.210760       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-518837", UID:"6240f985-7e07-4a9d-b6a2-831dd656a6f1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-518837 event: Registered Node ingress-addon-legacy-518837 in Controller
	E0201 09:18:55.218998       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"9aed4b23-2665-4dbb-bb16-e85e27c27f6f", ResourceVersion:"211", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63842375919, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001dc7220), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc001dc7240)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001dc7260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001579b40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc001dc7280), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001dc72a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001dc72e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001d332c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d91598), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0006f7500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000960220)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001d915e8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0201 09:18:55.240075       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0201 09:18:55.240115       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0201 09:18:55.290048       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0201 09:18:55.342253       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"4a4cbf56-ecd8-4589-998c-f652b4aee4c3", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0201 09:18:55.360395       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4cc813f7-7c9f-4266-9cbc-e8db7a0d5180", APIVersion:"apps/v1", ResourceVersion:"365", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-jrsf8
	I0201 09:19:10.211533       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0201 09:19:22.184289       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"1fd33d0c-d0e9-427e-987a-5ae2dc737218", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0201 09:19:22.189994       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"67ac7390-bb41-49ea-a8b3-01538c9bc9cc", APIVersion:"apps/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-wvwxc
	I0201 09:19:22.244802       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6bebbb21-abb2-45ec-9e5c-f7ff27a60b33", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-8jf66
	I0201 09:19:22.257016       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b2fa9918-125f-4ce3-b2e3-c75c96a0ce86", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-vpsz6
	I0201 09:19:27.022376       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6bebbb21-abb2-45ec-9e5c-f7ff27a60b33", APIVersion:"batch/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0201 09:19:28.025931       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b2fa9918-125f-4ce3-b2e3-c75c96a0ce86", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0201 09:22:13.441592       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"ff658287-68e3-4de2-8d61-1a58bfddb15d", APIVersion:"apps/v1", ResourceVersion:"716", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0201 09:22:13.447652       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"ef4f6895-a6f4-453f-a4ee-41bbfac14f44", APIVersion:"apps/v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-w9chz
	
	
	==> kube-proxy [2d9afd8e136f40f8b90a677bcd3876c6fd1210568beff1cced5e488e9c9030ef] <==
	W0201 09:18:56.033430       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0201 09:18:56.043830       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0201 09:18:56.043864       1 server_others.go:186] Using iptables Proxier.
	I0201 09:18:56.044215       1 server.go:583] Version: v1.18.20
	I0201 09:18:56.044648       1 config.go:315] Starting service config controller
	I0201 09:18:56.044717       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0201 09:18:56.045020       1 config.go:133] Starting endpoints config controller
	I0201 09:18:56.046732       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0201 09:18:56.144909       1 shared_informer.go:230] Caches are synced for service config 
	I0201 09:18:56.147123       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [946aa0f12828f68ad98a4f01b975718eab6b45984951ac7c9f0b3bf8d5138961] <==
	I0201 09:18:36.549707       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0201 09:18:36.551842       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0201 09:18:36.551869       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0201 09:18:36.553066       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0201 09:18:36.553160       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	E0201 09:18:36.554224       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0201 09:18:36.555485       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0201 09:18:36.555964       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0201 09:18:36.556097       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0201 09:18:36.556169       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0201 09:18:36.556247       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0201 09:18:36.556308       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0201 09:18:36.556448       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0201 09:18:36.556589       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0201 09:18:36.556601       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0201 09:18:36.556717       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0201 09:18:36.556953       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0201 09:18:37.390290       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0201 09:18:37.391203       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0201 09:18:37.472083       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0201 09:18:37.525794       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0201 09:18:37.543441       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0201 09:18:37.665246       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0201 09:18:40.452059       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0201 09:18:54.730979       1 factory.go:503] pod: kube-system/coredns-66bff467f8-67vrw is already present in the active queue
	
	
	==> kubelet <==
	Feb 01 09:21:48 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:21:48.934509    1860 pod_workers.go:191] Error syncing pod 8fff40b9-fab4-4ad1-b9dc-6234ee662444 ("kube-ingress-dns-minikube_kube-system(8fff40b9-fab4-4ad1-b9dc-6234ee662444)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Feb 01 09:22:02 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:02.934342    1860 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 01 09:22:02 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:02.934434    1860 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 01 09:22:02 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:02.934499    1860 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 01 09:22:02 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:02.934533    1860 pod_workers.go:191] Error syncing pod 8fff40b9-fab4-4ad1-b9dc-6234ee662444 ("kube-ingress-dns-minikube_kube-system(8fff40b9-fab4-4ad1-b9dc-6234ee662444)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Feb 01 09:22:13 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:13.452181    1860 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Feb 01 09:22:13 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:13.530722    1860 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-q7kgq" (UniqueName: "kubernetes.io/secret/b6318f71-00f5-44f2-b96e-10b4e3b218d8-default-token-q7kgq") pod "hello-world-app-5f5d8b66bb-w9chz" (UID: "b6318f71-00f5-44f2-b96e-10b4e3b218d8")
	Feb 01 09:22:13 ingress-addon-legacy-518837 kubelet[1860]: W0201 09:22:13.786007    1860 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/14f5e6ce5db5b7245113929893d0cda5e42076ad33c37cbdcb282c22c666f813/crio-68c5013263696db799c96d56341da814c62278a3bd0312ef0da33bd7d28f27fb WatchSource:0}: Error finding container 68c5013263696db799c96d56341da814c62278a3bd0312ef0da33bd7d28f27fb: Status 404 returned error &{%!s(*http.body=&{0xc000d24f80 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Feb 01 09:22:14 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:14.934930    1860 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 01 09:22:14 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:14.934992    1860 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 01 09:22:14 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:14.935057    1860 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 01 09:22:14 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:14.935104    1860 pod_workers.go:191] Error syncing pod 8fff40b9-fab4-4ad1-b9dc-6234ee662444 ("kube-ingress-dns-minikube_kube-system(8fff40b9-fab4-4ad1-b9dc-6234ee662444)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Feb 01 09:22:29 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:29.272794    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-q7cmg" (UniqueName: "kubernetes.io/secret/8fff40b9-fab4-4ad1-b9dc-6234ee662444-minikube-ingress-dns-token-q7cmg") pod "8fff40b9-fab4-4ad1-b9dc-6234ee662444" (UID: "8fff40b9-fab4-4ad1-b9dc-6234ee662444")
	Feb 01 09:22:29 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:29.274991    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fff40b9-fab4-4ad1-b9dc-6234ee662444-minikube-ingress-dns-token-q7cmg" (OuterVolumeSpecName: "minikube-ingress-dns-token-q7cmg") pod "8fff40b9-fab4-4ad1-b9dc-6234ee662444" (UID: "8fff40b9-fab4-4ad1-b9dc-6234ee662444"). InnerVolumeSpecName "minikube-ingress-dns-token-q7cmg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 01 09:22:29 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:29.373202    1860 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-q7cmg" (UniqueName: "kubernetes.io/secret/8fff40b9-fab4-4ad1-b9dc-6234ee662444-minikube-ingress-dns-token-q7cmg") on node "ingress-addon-legacy-518837" DevicePath ""
	Feb 01 09:22:30 ingress-addon-legacy-518837 kubelet[1860]: W0201 09:22:30.331521    1860 pod_container_deletor.go:77] Container "d41c3781af7d4a8d4f86340c25a18c6af3b782543cc8e1ab1e4195aa2293e270" not found in pod's containers
	Feb 01 09:22:30 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:30.731587    1860 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wvwxc.17afb2c6ce3c29ca", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wvwxc", UID:"cef1da88-e5af-4397-9a08-c8791b506d18", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-518837"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc166f695ab834dca, ext:231267690302, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc166f695ab834dca, ext:231267690302, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wvwxc.17afb2c6ce3c29ca" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 01 09:22:30 ingress-addon-legacy-518837 kubelet[1860]: E0201 09:22:30.735452    1860 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wvwxc.17afb2c6ce3c29ca", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wvwxc", UID:"cef1da88-e5af-4397-9a08-c8791b506d18", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-518837"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc166f695ab834dca, ext:231267690302, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc166f695aba50185, ext:231269899012, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wvwxc.17afb2c6ce3c29ca" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 01 09:22:33 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:33.283721    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-hj27j" (UniqueName: "kubernetes.io/secret/cef1da88-e5af-4397-9a08-c8791b506d18-ingress-nginx-token-hj27j") pod "cef1da88-e5af-4397-9a08-c8791b506d18" (UID: "cef1da88-e5af-4397-9a08-c8791b506d18")
	Feb 01 09:22:33 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:33.283782    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/cef1da88-e5af-4397-9a08-c8791b506d18-webhook-cert") pod "cef1da88-e5af-4397-9a08-c8791b506d18" (UID: "cef1da88-e5af-4397-9a08-c8791b506d18")
	Feb 01 09:22:33 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:33.286016    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef1da88-e5af-4397-9a08-c8791b506d18-ingress-nginx-token-hj27j" (OuterVolumeSpecName: "ingress-nginx-token-hj27j") pod "cef1da88-e5af-4397-9a08-c8791b506d18" (UID: "cef1da88-e5af-4397-9a08-c8791b506d18"). InnerVolumeSpecName "ingress-nginx-token-hj27j". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 01 09:22:33 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:33.286081    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef1da88-e5af-4397-9a08-c8791b506d18-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "cef1da88-e5af-4397-9a08-c8791b506d18" (UID: "cef1da88-e5af-4397-9a08-c8791b506d18"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 01 09:22:33 ingress-addon-legacy-518837 kubelet[1860]: W0201 09:22:33.337654    1860 pod_container_deletor.go:77] Container "ad524e67ddd44af8486fcaffe3231b2c4b5fab8911b9f8748c2171f42434f170" not found in pod's containers
	Feb 01 09:22:33 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:33.384139    1860 reconciler.go:319] Volume detached for volume "ingress-nginx-token-hj27j" (UniqueName: "kubernetes.io/secret/cef1da88-e5af-4397-9a08-c8791b506d18-ingress-nginx-token-hj27j") on node "ingress-addon-legacy-518837" DevicePath ""
	Feb 01 09:22:33 ingress-addon-legacy-518837 kubelet[1860]: I0201 09:22:33.384181    1860 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/cef1da88-e5af-4397-9a08-c8791b506d18-webhook-cert") on node "ingress-addon-legacy-518837" DevicePath ""
	
	
	==> storage-provisioner [61d4c08eea15a43263014b8c4e3d5f54bb00bf17e4561fb3a333062fb6404714] <==
	I0201 09:19:14.874747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0201 09:19:14.882426       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0201 09:19:14.882473       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0201 09:19:14.887939       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0201 09:19:14.888084       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-518837_177741e2-14b8-4853-a6ca-cca9b8bb77fd!
	I0201 09:19:14.888381       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb47b0e9-89de-4c3a-bb91-4df1c04c0aeb", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-518837_177741e2-14b8-4853-a6ca-cca9b8bb77fd became leader
	I0201 09:19:14.989128       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-518837_177741e2-14b8-4853-a6ca-cca9b8bb77fd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-518837 -n ingress-addon-legacy-518837
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-518837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.42s)
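The ImageInspectError entries in the kubelet log above are CRI-O's short-name policy at work: "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:..." carries no registry host, and the node's /etc/containers/registries.conf defines no unqualified-search registries, so the reference cannot be resolved. A minimal workaround sketch, assuming docker.io is the intended registry and the ingress-addon-legacy-518837 profile is still running (this is not part of the test output):

	# inside the node (minikube ssh -p ingress-addon-legacy-518837):
	# allow docker.io as a fallback for unqualified image names, then restart CRI-O
	echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
	sudo systemctl restart crio

	# or sidestep short-name resolution entirely by referencing the image fully qualified:
	# docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab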

                                                
                                    

Test pass (284/320)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 19.03
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.22
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 15.76
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.78
18 TestDownloadOnly/v1.28.4/DeleteAll 0.67
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.4
21 TestDownloadOnly/v1.29.0-rc.2/json-events 20.92
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.24
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.15
29 TestDownloadOnlyKic 1.38
30 TestBinaryMirror 0.77
31 TestOffline 53.64
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 131.74
38 TestAddons/parallel/Registry 19.45
40 TestAddons/parallel/InspektorGadget 11.77
41 TestAddons/parallel/MetricsServer 5.69
42 TestAddons/parallel/HelmTiller 11.13
44 TestAddons/parallel/CSI 97.84
45 TestAddons/parallel/Headlamp 13.25
46 TestAddons/parallel/CloudSpanner 5.55
47 TestAddons/parallel/LocalPath 55.45
48 TestAddons/parallel/NvidiaDevicePlugin 5.67
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 12.17
54 TestCertOptions 25.99
55 TestCertExpiration 226.39
57 TestForceSystemdFlag 28.56
58 TestForceSystemdEnv 38.19
60 TestKVMDriverInstallOrUpdate 4.58
64 TestErrorSpam/setup 21.78
65 TestErrorSpam/start 0.66
66 TestErrorSpam/status 0.95
67 TestErrorSpam/pause 1.59
68 TestErrorSpam/unpause 1.61
69 TestErrorSpam/stop 1.43
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 41.07
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 34.76
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.06
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.76
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
89 TestFunctional/serial/ExtraConfig 31.85
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.43
92 TestFunctional/serial/LogsFileCmd 1.45
93 TestFunctional/serial/InvalidService 4.54
95 TestFunctional/parallel/ConfigCmd 0.47
96 TestFunctional/parallel/DashboardCmd 19.48
97 TestFunctional/parallel/DryRun 0.45
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 1.05
103 TestFunctional/parallel/ServiceCmdConnect 7.71
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 37.87
107 TestFunctional/parallel/SSHCmd 0.58
108 TestFunctional/parallel/CpCmd 2.18
109 TestFunctional/parallel/MySQL 23.79
110 TestFunctional/parallel/FileSync 0.31
111 TestFunctional/parallel/CertSync 2.21
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
119 TestFunctional/parallel/License 0.65
120 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
124 TestFunctional/parallel/Version/short 0.07
125 TestFunctional/parallel/Version/components 1.23
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.12
131 TestFunctional/parallel/ImageCommands/Setup 1.91
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.26
141 TestFunctional/parallel/ServiceCmd/List 0.54
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
144 TestFunctional/parallel/ImageCommands/ImageRemove 1.03
145 TestFunctional/parallel/ServiceCmd/Format 0.45
146 TestFunctional/parallel/ServiceCmd/URL 0.4
149 TestFunctional/parallel/MountCmd/any-port 13.11
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
152 TestFunctional/parallel/ProfileCmd/profile_list 0.48
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.57
159 TestFunctional/parallel/MountCmd/specific-port 1.82
160 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 86.95
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.91
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.6
174 TestJSONOutput/start/Command 41.22
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.7
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.62
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.81
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.24
199 TestKicCustomNetwork/create_custom_network 44.1
200 TestKicCustomNetwork/use_default_bridge_network 26.76
201 TestKicExistingNetwork 28.84
202 TestKicCustomSubnet 25.65
203 TestKicStaticIP 24.87
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 57.49
208 TestMountStart/serial/StartWithMountFirst 5.88
209 TestMountStart/serial/VerifyMountFirst 0.27
210 TestMountStart/serial/StartWithMountSecond 8.67
211 TestMountStart/serial/VerifyMountSecond 0.28
212 TestMountStart/serial/DeleteFirst 1.63
213 TestMountStart/serial/VerifyMountPostDelete 0.28
214 TestMountStart/serial/Stop 1.2
215 TestMountStart/serial/RestartStopped 7.52
216 TestMountStart/serial/VerifyMountPostStop 0.28
219 TestMultiNode/serial/FreshStart2Nodes 60.43
220 TestMultiNode/serial/DeployApp2Nodes 4.94
221 TestMultiNode/serial/PingHostFrom2Pods 0.83
222 TestMultiNode/serial/AddNode 35.66
223 TestMultiNode/serial/MultiNodeLabels 0.07
224 TestMultiNode/serial/ProfileList 0.29
225 TestMultiNode/serial/CopyFile 9.88
226 TestMultiNode/serial/StopNode 2.16
227 TestMultiNode/serial/StartAfterStop 11.85
228 TestMultiNode/serial/RestartKeepsNodes 113.04
229 TestMultiNode/serial/DeleteNode 4.78
230 TestMultiNode/serial/StopMultiNode 23.77
231 TestMultiNode/serial/RestartMultiNode 74.22
232 TestMultiNode/serial/ValidateNameConflict 24.11
237 TestPreload 146.98
239 TestScheduledStopUnix 100.1
242 TestInsufficientStorage 10.67
243 TestRunningBinaryUpgrade 95.02
245 TestKubernetesUpgrade 352.61
246 TestMissingContainerUpgrade 171.87
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
249 TestNoKubernetes/serial/StartWithK8s 37.93
250 TestNoKubernetes/serial/StartWithStopK8s 12.47
251 TestNoKubernetes/serial/Start 8.13
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
253 TestNoKubernetes/serial/ProfileList 0.73
254 TestNoKubernetes/serial/Stop 1.21
255 TestNoKubernetes/serial/StartNoArgs 10.14
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
264 TestStoppedBinaryUpgrade/Setup 2.25
265 TestStoppedBinaryUpgrade/Upgrade 58.36
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.89
268 TestPause/serial/Start 56.84
269 TestPause/serial/SecondStartNoReconfiguration 31.22
277 TestNetworkPlugins/group/false 3.87
281 TestPause/serial/Pause 0.73
282 TestPause/serial/VerifyStatus 0.34
283 TestPause/serial/Unpause 0.67
284 TestPause/serial/PauseAgain 0.88
285 TestPause/serial/DeletePaused 2.75
286 TestPause/serial/VerifyDeletedResources 19.21
288 TestStartStop/group/old-k8s-version/serial/FirstStart 114.75
290 TestStartStop/group/no-preload/serial/FirstStart 69.38
291 TestStartStop/group/no-preload/serial/DeployApp 9.26
292 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
293 TestStartStop/group/no-preload/serial/Stop 11.88
294 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
295 TestStartStop/group/no-preload/serial/SecondStart 341.67
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.43
298 TestStartStop/group/embed-certs/serial/FirstStart 46.01
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
300 TestStartStop/group/old-k8s-version/serial/Stop 12.03
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
302 TestStartStop/group/old-k8s-version/serial/SecondStart 436.29
303 TestStartStop/group/embed-certs/serial/DeployApp 11.29
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.48
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
307 TestStartStop/group/embed-certs/serial/Stop 11.98
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
309 TestStartStop/group/embed-certs/serial/SecondStart 337.09
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.88
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 344.15
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/no-preload/serial/Pause 2.89
320 TestStartStop/group/newest-cni/serial/FirstStart 39.85
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
323 TestStartStop/group/newest-cni/serial/Stop 1.28
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
325 TestStartStop/group/newest-cni/serial/SecondStart 27.03
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
330 TestStartStop/group/newest-cni/serial/Pause 2.89
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
332 TestNetworkPlugins/group/auto/Start 46.19
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
334 TestStartStop/group/embed-certs/serial/Pause 3.06
335 TestNetworkPlugins/group/kindnet/Start 38.67
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
337 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
340 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/old-k8s-version/serial/Pause 3.02
342 TestNetworkPlugins/group/auto/KubeletFlags 0.3
343 TestNetworkPlugins/group/auto/NetCatPod 10.21
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
346 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.17
347 TestNetworkPlugins/group/calico/Start 68.1
348 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
349 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
350 TestNetworkPlugins/group/auto/DNS 0.15
351 TestNetworkPlugins/group/auto/Localhost 0.15
352 TestNetworkPlugins/group/auto/HairPin 0.15
353 TestNetworkPlugins/group/custom-flannel/Start 61.84
354 TestNetworkPlugins/group/kindnet/DNS 0.18
355 TestNetworkPlugins/group/kindnet/Localhost 0.13
356 TestNetworkPlugins/group/kindnet/HairPin 0.14
357 TestNetworkPlugins/group/enable-default-cni/Start 46.09
358 TestNetworkPlugins/group/flannel/Start 62.82
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
362 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
363 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
364 TestNetworkPlugins/group/calico/KubeletFlags 0.41
365 TestNetworkPlugins/group/calico/NetCatPod 10.28
366 TestNetworkPlugins/group/custom-flannel/DNS 0.15
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
369 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
372 TestNetworkPlugins/group/calico/DNS 0.15
373 TestNetworkPlugins/group/calico/Localhost 0.11
374 TestNetworkPlugins/group/calico/HairPin 0.13
375 TestNetworkPlugins/group/flannel/ControllerPod 6.14
376 TestNetworkPlugins/group/bridge/Start 47.6
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
378 TestNetworkPlugins/group/flannel/NetCatPod 11.28
379 TestNetworkPlugins/group/flannel/DNS 0.14
380 TestNetworkPlugins/group/flannel/Localhost 0.11
381 TestNetworkPlugins/group/flannel/HairPin 0.11
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
383 TestNetworkPlugins/group/bridge/NetCatPod 10.18
384 TestNetworkPlugins/group/bridge/DNS 0.13
385 TestNetworkPlugins/group/bridge/Localhost 0.1
386 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.16.0/json-events (19.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-599199 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-599199 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (19.031818088s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (19.03s)
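The command above streams one CloudEvents-style JSON object per line on stdout while it downloads. A sketch of consuming that stream with jq; the "io.k8s.sigs.minikube.step" event type and the .data.message field are assumed from minikube's JSON event schema rather than taken from this run:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-599199 \
	    --force --alsologtostderr --kubernetes-version=v1.16.0 \
	    --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'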

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
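This subtest only checks that the preloaded-images tarball fetched by the json-events run above is present in the local cache. A hand-run equivalent, assuming the default MINIKUBE_HOME (this CI run uses /home/jenkins/minikube-integration/18051-952908/.minikube instead, as the start log further below shows):

	ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4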

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-599199
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-599199: exit status 85 (81.575953ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-599199 | jenkins | v1.32.0 | 01 Feb 24 09:07 UTC |          |
	|         | -p download-only-599199        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/01 09:07:50
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0201 09:07:50.487225  959752 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:07:50.487501  959752 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:07:50.487511  959752 out.go:309] Setting ErrFile to fd 2...
	I0201 09:07:50.487516  959752 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:07:50.487712  959752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	W0201 09:07:50.487846  959752 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18051-952908/.minikube/config/config.json: open /home/jenkins/minikube-integration/18051-952908/.minikube/config/config.json: no such file or directory
	I0201 09:07:50.488426  959752 out.go:303] Setting JSON to true
	I0201 09:07:50.489417  959752 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":57018,"bootTime":1706721453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0201 09:07:50.489485  959752 start.go:138] virtualization: kvm guest
	I0201 09:07:50.492153  959752 out.go:97] [download-only-599199] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0201 09:07:50.494029  959752 out.go:169] MINIKUBE_LOCATION=18051
	W0201 09:07:50.492298  959752 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball: no such file or directory
	I0201 09:07:50.492348  959752 notify.go:220] Checking for updates...
	I0201 09:07:50.497158  959752 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0201 09:07:50.498688  959752 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:07:50.500279  959752 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	I0201 09:07:50.502057  959752 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0201 09:07:50.504996  959752 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0201 09:07:50.505252  959752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0201 09:07:50.529868  959752 docker.go:122] docker version: linux-25.0.2:Docker Engine - Community
	I0201 09:07:50.529986  959752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:07:50.582920  959752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2024-02-01 09:07:50.573135786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:07:50.583024  959752 docker.go:295] overlay module found
	I0201 09:07:50.585037  959752 out.go:97] Using the docker driver based on user configuration
	I0201 09:07:50.585072  959752 start.go:298] selected driver: docker
	I0201 09:07:50.585082  959752 start.go:902] validating driver "docker" against <nil>
	I0201 09:07:50.585181  959752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:07:50.640140  959752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2024-02-01 09:07:50.630073253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:07:50.640341  959752 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0201 09:07:50.640775  959752 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0201 09:07:50.641010  959752 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0201 09:07:50.643104  959752 out.go:169] Using Docker driver with root privileges
	I0201 09:07:50.644862  959752 cni.go:84] Creating CNI manager for ""
	I0201 09:07:50.644894  959752 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0201 09:07:50.644911  959752 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0201 09:07:50.644924  959752 start_flags.go:321] config:
	{Name:download-only-599199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-599199 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0201 09:07:50.646763  959752 out.go:97] Starting control plane node download-only-599199 in cluster download-only-599199
	I0201 09:07:50.646796  959752 cache.go:121] Beginning downloading kic base image for docker with crio
	I0201 09:07:50.648454  959752 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0201 09:07:50.648499  959752 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0201 09:07:50.648627  959752 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0201 09:07:50.665553  959752 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0201 09:07:50.665728  959752 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0201 09:07:50.665807  959752 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0201 09:07:50.751816  959752 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0201 09:07:50.751858  959752 cache.go:56] Caching tarball of preloaded images
	I0201 09:07:50.752053  959752 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0201 09:07:50.754895  959752 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0201 09:07:50.754930  959752 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:07:50.859551  959752 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0201 09:08:03.135748  959752 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:08:03.135840  959752 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:08:03.927950  959752 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0201 09:08:04.057817  959752 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0201 09:08:04.058188  959752 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/download-only-599199/config.json ...
	I0201 09:08:04.058219  959752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/download-only-599199/config.json: {Name:mkbb0fc36825cdb565e4f71e4d3d62a384b5decf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:08:04.058447  959752 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0201 09:08:04.058630  959752 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/18051-952908/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-599199"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
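The non-zero exit above is expected: --download-only never creates a control-plane node, so "minikube logs" has nothing to collect from (hence the 'The control plane node "" does not exist' message), and the test treats exit status 85 as the normal outcome. A sketch of observing the same state by hand at this point in the run (not part of the test):

	out/minikube-linux-amd64 status -p download-only-599199 \
	  || echo "status exited $? - no control-plane node after --download-only"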

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-599199
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (15.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-625877 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-625877 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.757619797s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (15.76s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-625877
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-625877: exit status 85 (783.381569ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-599199 | jenkins | v1.32.0 | 01 Feb 24 09:07 UTC |                     |
	|         | -p download-only-599199        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| delete  | -p download-only-599199        | download-only-599199 | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| start   | -o=json --download-only        | download-only-625877 | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC |                     |
	|         | -p download-only-625877        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/01 09:08:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0201 09:08:09.969076  960073 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:08:09.969368  960073 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:08:09.969379  960073 out.go:309] Setting ErrFile to fd 2...
	I0201 09:08:09.969384  960073 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:08:09.969637  960073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:08:09.970284  960073 out.go:303] Setting JSON to true
	I0201 09:08:09.971292  960073 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":57037,"bootTime":1706721453,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0201 09:08:09.971370  960073 start.go:138] virtualization: kvm guest
	I0201 09:08:09.973826  960073 out.go:97] [download-only-625877] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0201 09:08:09.975859  960073 out.go:169] MINIKUBE_LOCATION=18051
	I0201 09:08:09.973975  960073 notify.go:220] Checking for updates...
	I0201 09:08:09.978889  960073 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0201 09:08:09.980535  960073 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:08:09.982107  960073 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	I0201 09:08:09.983598  960073 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0201 09:08:09.986439  960073 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0201 09:08:09.986671  960073 driver.go:392] Setting default libvirt URI to qemu:///system
	I0201 09:08:10.010878  960073 docker.go:122] docker version: linux-25.0.2:Docker Engine - Community
	I0201 09:08:10.011006  960073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:08:10.063670  960073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-01 09:08:10.053235149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:08:10.063942  960073 docker.go:295] overlay module found
	I0201 09:08:10.066236  960073 out.go:97] Using the docker driver based on user configuration
	I0201 09:08:10.066265  960073 start.go:298] selected driver: docker
	I0201 09:08:10.066271  960073 start.go:902] validating driver "docker" against <nil>
	I0201 09:08:10.066351  960073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:08:10.117750  960073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-01 09:08:10.108154608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:08:10.117914  960073 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0201 09:08:10.118421  960073 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0201 09:08:10.118575  960073 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0201 09:08:10.120467  960073 out.go:169] Using Docker driver with root privileges
	I0201 09:08:10.121947  960073 cni.go:84] Creating CNI manager for ""
	I0201 09:08:10.121964  960073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0201 09:08:10.121975  960073 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0201 09:08:10.121987  960073 start_flags.go:321] config:
	{Name:download-only-625877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-625877 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0201 09:08:10.123656  960073 out.go:97] Starting control plane node download-only-625877 in cluster download-only-625877
	I0201 09:08:10.123686  960073 cache.go:121] Beginning downloading kic base image for docker with crio
	I0201 09:08:10.125270  960073 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0201 09:08:10.125316  960073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0201 09:08:10.125419  960073 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0201 09:08:10.141152  960073 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0201 09:08:10.141300  960073 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0201 09:08:10.141318  960073 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0201 09:08:10.141326  960073 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0201 09:08:10.141335  960073 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0201 09:08:10.221691  960073 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0201 09:08:10.221752  960073 cache.go:56] Caching tarball of preloaded images
	I0201 09:08:10.221907  960073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0201 09:08:10.223952  960073 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0201 09:08:10.223986  960073 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:08:10.326904  960073 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0201 09:08:23.653162  960073 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:08:23.653276  960073 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:08:24.595730  960073 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0201 09:08:24.596095  960073 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/download-only-625877/config.json ...
	I0201 09:08:24.596125  960073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/download-only-625877/config.json: {Name:mkc5afaaae3f30151213f531825bb4df028fff55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:08:24.596292  960073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0201 09:08:24.596421  960073 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18051-952908/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-625877"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.78s)
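The exit status 85 above is expected: a --download-only run never creates a control plane node, so "minikube logs" has nothing to read (the captured stdout says as much). A quick, informal way to confirm what the run actually cached, using the paths reported in the log (not part of the test itself):

    # Preload tarball and kubectl binary cached by the --download-only run
    ls -lh /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/
    ls -lh /home/jenkins/minikube-integration/18051-952908/.minikube/cache/linux/amd64/v1.28.4/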

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.67s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-625877
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.40s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (20.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-057828 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-057828 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (20.922154811s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (20.92s)
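The -o=json flag makes the start command emit one JSON progress event per line. As a minimal sketch (assuming jq is available on the host; jq is not used by the test), the same invocation can be rerun by hand and the event stream pretty-printed:

    # Re-run the download-only start and pretty-print each progress event
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-057828 --force \
      --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker | jq .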

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-057828
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-057828: exit status 85 (89.145291ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-599199 | jenkins | v1.32.0 | 01 Feb 24 09:07 UTC |                     |
	|         | -p download-only-599199           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| delete  | -p download-only-599199           | download-only-599199 | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| start   | -o=json --download-only           | download-only-625877 | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC |                     |
	|         | -p download-only-625877           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| delete  | -p download-only-625877           | download-only-625877 | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC | 01 Feb 24 09:08 UTC |
	| start   | -o=json --download-only           | download-only-057828 | jenkins | v1.32.0 | 01 Feb 24 09:08 UTC |                     |
	|         | -p download-only-057828           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/01 09:08:27
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0201 09:08:27.580078  960386 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:08:27.580242  960386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:08:27.580253  960386 out.go:309] Setting ErrFile to fd 2...
	I0201 09:08:27.580258  960386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:08:27.580472  960386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:08:27.581074  960386 out.go:303] Setting JSON to true
	I0201 09:08:27.582104  960386 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":57055,"bootTime":1706721453,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0201 09:08:27.582174  960386 start.go:138] virtualization: kvm guest
	I0201 09:08:27.584257  960386 out.go:97] [download-only-057828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0201 09:08:27.585966  960386 out.go:169] MINIKUBE_LOCATION=18051
	I0201 09:08:27.584414  960386 notify.go:220] Checking for updates...
	I0201 09:08:27.588951  960386 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0201 09:08:27.590594  960386 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:08:27.592220  960386 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	I0201 09:08:27.593739  960386 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0201 09:08:27.596273  960386 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0201 09:08:27.596550  960386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0201 09:08:27.619021  960386 docker.go:122] docker version: linux-25.0.2:Docker Engine - Community
	I0201 09:08:27.619164  960386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:08:27.671026  960386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-01 09:08:27.660930978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:08:27.671129  960386 docker.go:295] overlay module found
	I0201 09:08:27.672938  960386 out.go:97] Using the docker driver based on user configuration
	I0201 09:08:27.672959  960386 start.go:298] selected driver: docker
	I0201 09:08:27.672964  960386 start.go:902] validating driver "docker" against <nil>
	I0201 09:08:27.673042  960386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:08:27.731963  960386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-01 09:08:27.721144997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:08:27.732191  960386 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0201 09:08:27.732897  960386 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0201 09:08:27.733080  960386 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0201 09:08:27.735731  960386 out.go:169] Using Docker driver with root privileges
	I0201 09:08:27.738024  960386 cni.go:84] Creating CNI manager for ""
	I0201 09:08:27.738060  960386 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0201 09:08:27.738074  960386 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0201 09:08:27.738090  960386 start_flags.go:321] config:
	{Name:download-only-057828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-057828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0201 09:08:27.740364  960386 out.go:97] Starting control plane node download-only-057828 in cluster download-only-057828
	I0201 09:08:27.740407  960386 cache.go:121] Beginning downloading kic base image for docker with crio
	I0201 09:08:27.742172  960386 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0201 09:08:27.742225  960386 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0201 09:08:27.742267  960386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0201 09:08:27.759278  960386 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0201 09:08:27.759429  960386 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0201 09:08:27.759460  960386 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0201 09:08:27.759468  960386 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0201 09:08:27.759482  960386 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0201 09:08:27.840737  960386 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0201 09:08:27.840779  960386 cache.go:56] Caching tarball of preloaded images
	I0201 09:08:27.840965  960386 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0201 09:08:27.843159  960386 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0201 09:08:27.843196  960386 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:08:27.961012  960386 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0201 09:08:41.845142  960386 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:08:41.845249  960386 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18051-952908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0201 09:08:42.670145  960386 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0201 09:08:42.670562  960386 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/download-only-057828/config.json ...
	I0201 09:08:42.670624  960386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/download-only-057828/config.json: {Name:mk04f64ffeb7b8bbdb9bee5fd71b1a0d56d7b00e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0201 09:08:42.670814  960386 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0201 09:08:42.670981  960386 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18051-952908/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-057828"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)
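The download.go line above fetches kubectl together with its published .sha256 file. A manual equivalent of that checksum verification (standard curl/sha256sum usage; not part of the test harness):

    # Download kubectl and verify it against the upstream SHA-256, as download.go does internally
    curl -fLO https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl
    echo "$(curl -fsL https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum -c -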

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-057828
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (1.38s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-452662 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-452662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-452662
--- PASS: TestDownloadOnlyKic (1.38s)

                                                
                                    
TestBinaryMirror (0.77s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-807134 --alsologtostderr --binary-mirror http://127.0.0.1:34923 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-807134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-807134
--- PASS: TestBinaryMirror (0.77s)

                                                
                                    
TestOffline (53.64s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-149120 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-149120 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (51.143613418s)
helpers_test.go:175: Cleaning up "offline-crio-149120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-149120
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-149120: (2.491530058s)
--- PASS: TestOffline (53.64s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-642352
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-642352: exit status 85 (69.696177ms)

                                                
                                                
-- stdout --
	* Profile "addons-642352" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-642352"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-642352
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-642352: exit status 85 (71.622323ms)

                                                
                                                
-- stdout --
	* Profile "addons-642352" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-642352"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
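Both PreSetup checks rely on exit status 85 being returned when an addon is toggled for a profile that has never been created. To see which profiles do exist at that point, the standard (non-test) command is:

    # List known profiles; addons-642352 is absent until TestAddons/Setup creates it
    out/minikube-linux-amd64 profile list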

                                                
                                    
TestAddons/Setup (131.74s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-642352 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-642352 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m11.740446348s)
--- PASS: TestAddons/Setup (131.74s)
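The start line above enables a long list of addons in one shot. A simple way to confirm what ended up enabled on the resulting profile (a stock minikube subcommand, not used by the test):

    # Show addon status for the addons-642352 profile
    out/minikube-linux-amd64 -p addons-642352 addons list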

                                                
                                    
TestAddons/parallel/Registry (19.45s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 14.999925ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-s2xzz" [4dea1280-3766-46b5-b712-24e29ff33b38] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00468243s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tsccz" [5512787a-dabe-4019-aead-c68f8a431ce8] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003926711s
addons_test.go:340: (dbg) Run:  kubectl --context addons-642352 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-642352 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-642352 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.616585274s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 ip
2024/02/01 09:11:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.45s)
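The DEBUG line shows the registry addon answering on 192.168.49.2:5000. Assuming it exposes the standard Docker registry v2 HTTP API (the endpoint below is an assumption, not something exercised by this test), its contents can be listed from the host:

    # Query the catalog exposed by the registry addon
    curl -s http://192.168.49.2:5000/v2/_catalog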

                                                
                                    
TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hggz5" [75f00bc8-965e-47e9-993a-8bc99b5093cf] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003644633s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-642352
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-642352: (5.761316443s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 10.613235ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-4mrxs" [d1e198e3-e716-4091-b76d-458a065b8206] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004659184s
addons_test.go:415: (dbg) Run:  kubectl --context addons-642352 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.69s)
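Once the k8s-app=metrics-server pod is healthy, "kubectl top" works against the cluster; the test only checks pods, but node metrics are served by the same component (illustrative only, not part of the test):

    # Pod metrics (as in the test) and node metrics from the same metrics-server
    kubectl --context addons-642352 top pods -n kube-system
    kubectl --context addons-642352 top nodes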

                                                
                                    
TestAddons/parallel/HelmTiller (11.13s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.485991ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-q2zb9" [3d55d479-092d-4cdb-9344-110276a11056] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005009354s
addons_test.go:473: (dbg) Run:  kubectl --context addons-642352 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-642352 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.578401339s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.13s)

                                                
                                    
TestAddons/parallel/CSI (97.84s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 14.611457ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-642352 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-642352 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e7e3857f-722a-4ac1-9aa4-ba5bb04221f6] Pending
helpers_test.go:344: "task-pv-pod" [e7e3857f-722a-4ac1-9aa4-ba5bb04221f6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e7e3857f-722a-4ac1-9aa4-ba5bb04221f6] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003979962s
addons_test.go:584: (dbg) Run:  kubectl --context addons-642352 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-642352 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-642352 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-642352 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-642352 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-642352 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-642352 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [073087b8-0801-415e-87cd-ef21f310884a] Pending
helpers_test.go:344: "task-pv-pod-restore" [073087b8-0801-415e-87cd-ef21f310884a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [073087b8-0801-415e-87cd-ef21f310884a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004191455s
addons_test.go:626: (dbg) Run:  kubectl --context addons-642352 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-642352 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-642352 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-642352 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.65629276s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (97.84s)
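The long run of identical "get pvc hpvc" lines above is the test helper polling the claim until it leaves Pending. A rough shell equivalent of that loop (bash sketch; the kubectl invocation is copied from the log, the loop itself is not part of the test):

    # Block until the CSI-provisioned PVC reports phase Bound
    until [ "$(kubectl --context addons-642352 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done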

                                                
                                    
TestAddons/parallel/Headlamp (13.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-642352 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-642352 --alsologtostderr -v=1: (1.239549329s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-r8fgz" [d70430d2-ca33-4428-8e26-884ba78b5096] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-r8fgz" [d70430d2-ca33-4428-8e26-884ba78b5096] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-r8fgz" [d70430d2-ca33-4428-8e26-884ba78b5096] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-r8fgz" [d70430d2-ca33-4428-8e26-884ba78b5096] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004477055s
--- PASS: TestAddons/parallel/Headlamp (13.25s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-tgsb4" [bd9299b1-a961-4c7c-b2ec-85f5dfeb3bc6] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004185976s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-642352
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
TestAddons/parallel/LocalPath (55.45s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-642352 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-642352 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642352 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a6b506a5-781e-4255-b818-adc281d5f9fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a6b506a5-781e-4255-b818-adc281d5f9fe] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a6b506a5-781e-4255-b818-adc281d5f9fe] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004172003s
addons_test.go:891: (dbg) Run:  kubectl --context addons-642352 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 ssh "cat /opt/local-path-provisioner/pvc-5a4495e1-0e0a-490e-9234-87dcffee5021_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-642352 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-642352 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-642352 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-642352 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.525160001s)
--- PASS: TestAddons/parallel/LocalPath (55.45s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-p8cwv" [e416cbdf-6552-406b-8891-00782080893a] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004280532s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-642352
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-sh5hc" [47168b24-ed55-4c74-9c63-febaf5609b0c] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006404672s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-642352 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-642352 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.17s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-642352
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-642352: (11.870261578s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-642352
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-642352
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-642352
--- PASS: TestAddons/StoppedEnableDisable (12.17s)

                                                
                                    
TestCertOptions (25.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-464556 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-464556 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.432328395s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-464556 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-464556 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-464556 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-464556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-464556
E0201 09:42:00.360191  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-464556: (1.93883695s)
--- PASS: TestCertOptions (25.99s)

                                                
                                    
TestCertExpiration (226.39s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-446910 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-446910 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.75499194s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-446910 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-446910 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.295767838s)
helpers_test.go:175: Cleaning up "cert-expiration-446910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-446910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-446910: (2.342261434s)
--- PASS: TestCertExpiration (226.39s)

                                                
                                    
TestForceSystemdFlag (28.56s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-024019 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0201 09:41:00.003967  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:41:03.362658  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-024019 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.908732441s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-024019 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-024019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-024019
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-024019: (2.354854961s)
--- PASS: TestForceSystemdFlag (28.56s)

                                                
                                    
TestForceSystemdEnv (38.19s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-196356 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-196356 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.804609883s)
helpers_test.go:175: Cleaning up "force-systemd-env-196356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-196356
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-196356: (5.380777486s)
--- PASS: TestForceSystemdEnv (38.19s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.58s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.58s)

                                                
                                    
TestErrorSpam/setup (21.78s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-163310 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-163310 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-163310 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-163310 --driver=docker  --container-runtime=crio: (21.775125823s)
--- PASS: TestErrorSpam/setup (21.78s)

                                                
                                    
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
TestErrorSpam/pause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
TestErrorSpam/stop (1.43s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 stop: (1.210532335s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163310 --log_dir /tmp/nospam-163310 stop
--- PASS: TestErrorSpam/stop (1.43s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/18051-952908/.minikube/files/etc/test/nested/copy/959740/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (41.07s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-571055 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-571055 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.070935952s)
--- PASS: TestFunctional/serial/StartWithProxy (41.07s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.76s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-571055 --alsologtostderr -v=8
E0201 09:16:03.362638  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:03.368564  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:03.378826  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:03.399113  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:03.439390  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:03.519795  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:03.680208  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:04.000737  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:04.640929  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:05.921574  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:08.482118  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-571055 --alsologtostderr -v=8: (34.761710576s)
functional_test.go:659: soft start took 34.762445296s for "functional-571055" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.76s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-571055 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 cache add registry.k8s.io/pause:latest
E0201 09:16:13.602718  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.989699ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 kubectl -- --context functional-571055 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-571055 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.85s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-571055 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0201 09:16:23.842907  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:16:44.323611  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-571055 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.845925714s)
functional_test.go:757: restart took 31.846099313s for "functional-571055" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.85s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-571055 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-571055 logs: (1.428449828s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 logs --file /tmp/TestFunctionalserialLogsFileCmd2805313010/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-571055 logs --file /tmp/TestFunctionalserialLogsFileCmd2805313010/001/logs.txt: (1.45081309s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (4.54s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-571055 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-571055
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-571055: exit status 115 (362.509823ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32568 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-571055 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-571055 delete -f testdata/invalidsvc.yaml: (1.000145776s)
--- PASS: TestFunctional/serial/InvalidService (4.54s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 config get cpus: exit status 14 (83.932763ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 config get cpus: exit status 14 (83.564769ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-571055 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-571055 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 996462: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.48s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-571055 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-571055 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (198.009146ms)

                                                
                                                
-- stdout --
	* [functional-571055] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:17:27.594581  995894 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:27.594865  995894 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:27.594876  995894 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:27.594883  995894 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:27.595087  995894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:27.595678  995894 out.go:303] Setting JSON to false
	I0201 09:17:27.597092  995894 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":57595,"bootTime":1706721453,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0201 09:17:27.597216  995894 start.go:138] virtualization: kvm guest
	I0201 09:17:27.600711  995894 out.go:177] * [functional-571055] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0201 09:17:27.602090  995894 out.go:177]   - MINIKUBE_LOCATION=18051
	I0201 09:17:27.602097  995894 notify.go:220] Checking for updates...
	I0201 09:17:27.603410  995894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0201 09:17:27.604690  995894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:17:27.606003  995894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	I0201 09:17:27.607326  995894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0201 09:17:27.608666  995894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0201 09:17:27.610281  995894 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:27.610822  995894 driver.go:392] Setting default libvirt URI to qemu:///system
	I0201 09:17:27.635969  995894 docker.go:122] docker version: linux-25.0.2:Docker Engine - Community
	I0201 09:17:27.636143  995894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:17:27.700439  995894 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-01 09:17:27.690348421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:17:27.700660  995894 docker.go:295] overlay module found
	I0201 09:17:27.703159  995894 out.go:177] * Using the docker driver based on existing profile
	I0201 09:17:27.704729  995894 start.go:298] selected driver: docker
	I0201 09:17:27.704749  995894 start.go:902] validating driver "docker" against &{Name:functional-571055 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-571055 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0201 09:17:27.704830  995894 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0201 09:17:27.706888  995894 out.go:177] 
	W0201 09:17:27.708366  995894 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0201 09:17:27.710143  995894 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-571055 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-571055 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-571055 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (180.48784ms)

                                                
                                                
-- stdout --
	* [functional-571055] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:17:23.021930  994588 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:17:23.022046  994588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:23.022059  994588 out.go:309] Setting ErrFile to fd 2...
	I0201 09:17:23.022066  994588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:17:23.022407  994588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:17:23.023013  994588 out.go:303] Setting JSON to false
	I0201 09:17:23.024110  994588 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":57590,"bootTime":1706721453,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0201 09:17:23.024189  994588 start.go:138] virtualization: kvm guest
	I0201 09:17:23.026878  994588 out.go:177] * [functional-571055] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0201 09:17:23.028563  994588 out.go:177]   - MINIKUBE_LOCATION=18051
	I0201 09:17:23.028630  994588 notify.go:220] Checking for updates...
	I0201 09:17:23.030451  994588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0201 09:17:23.032124  994588 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:17:23.033777  994588 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	I0201 09:17:23.035435  994588 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0201 09:17:23.036819  994588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0201 09:17:23.038651  994588 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:17:23.039139  994588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0201 09:17:23.064851  994588 docker.go:122] docker version: linux-25.0.2:Docker Engine - Community
	I0201 09:17:23.064983  994588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:17:23.124645  994588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-01 09:17:23.114323352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:17:23.124750  994588 docker.go:295] overlay module found
	I0201 09:17:23.126768  994588 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0201 09:17:23.128424  994588 start.go:298] selected driver: docker
	I0201 09:17:23.128439  994588 start.go:902] validating driver "docker" against &{Name:functional-571055 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-571055 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0201 09:17:23.128554  994588 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0201 09:17:23.130766  994588 out.go:177] 
	W0201 09:17:23.132037  994588 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0201 09:17:23.133412  994588 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
E0201 09:17:25.284828  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)
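
For reference, the second invocation above shows that "status -f" takes a Go template over the status struct; the field names come straight from the logged command, and the label text to the left of each colon is arbitrary (the log's "kublet" label is just a string, not a field name). A minimal sketch of the same checks run by hand:
    $ out/minikube-linux-amd64 -p functional-571055 status
    $ out/minikube-linux-amd64 -p functional-571055 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    $ out/minikube-linux-amd64 -p functional-571055 status -o json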

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-571055 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-571055 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-cdfj7" [1102db97-b491-451e-9ca4-5c3236296f9d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-cdfj7" [1102db97-b491-451e-9ca4-5c3236296f9d] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004087339s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31095
functional_test.go:1674: http://192.168.49.2:31095: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-cdfj7

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31095
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.71s)
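
The flow above can be replayed by hand with the same commands the test runs; the explicit wait and the final curl are assumptions about how one would drive it manually (the test polls pod readiness and fetches the URL internally):
    $ kubectl --context functional-571055 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    $ kubectl --context functional-571055 expose deployment hello-node-connect --type=NodePort --port=8080
    $ kubectl --context functional-571055 wait --for=condition=available deployment/hello-node-connect --timeout=120s
    $ URL=$(out/minikube-linux-amd64 -p functional-571055 service hello-node-connect --url)
    $ curl -s "$URL"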

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (37.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a0aa56bc-b174-4f58-95fa-5f8904aba07f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00550694s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-571055 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-571055 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-571055 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-571055 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-571055 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [adfbfc04-d420-4930-840d-8153b8dc9466] Pending
helpers_test.go:344: "sp-pod" [adfbfc04-d420-4930-840d-8153b8dc9466] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [adfbfc04-d420-4930-840d-8153b8dc9466] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004373867s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-571055 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-571055 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-571055 delete -f testdata/storage-provisioner/pod.yaml: (1.161535051s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-571055 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [45b092ca-abac-4e4f-a85f-2e7628ae62ba] Pending
helpers_test.go:344: "sp-pod" [45b092ca-abac-4e4f-a85f-2e7628ae62ba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [45b092ca-abac-4e4f-a85f-2e7628ae62ba] Running
2024/02/01 09:17:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003979002s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-571055 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.87s)
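
The persistence check above reduces to: create a PVC, write through one pod, delete that pod, and confirm the file is still there from a replacement pod. A condensed sketch using the same manifests and pod name as this run (the kubectl wait lines are assumed stand-ins for the test's readiness polling):
    $ kubectl --context functional-571055 apply -f testdata/storage-provisioner/pvc.yaml
    $ kubectl --context functional-571055 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-571055 wait --for=condition=ready pod/sp-pod --timeout=180s
    $ kubectl --context functional-571055 exec sp-pod -- touch /tmp/mount/foo
    $ kubectl --context functional-571055 delete -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-571055 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-571055 wait --for=condition=ready pod/sp-pod --timeout=180s
    $ kubectl --context functional-571055 exec sp-pod -- ls /tmp/mount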

                                                
                                    
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh -n functional-571055 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 cp functional-571055:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2395468865/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh -n functional-571055 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh -n functional-571055 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.18s)
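
As shown above, "minikube cp" copies files both into and out of the node over ssh. A minimal round-trip sketch; the local output path and the final diff are illustrative additions, not part of the test:
    $ out/minikube-linux-amd64 -p functional-571055 cp testdata/cp-test.txt /home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p functional-571055 ssh -n functional-571055 "sudo cat /home/docker/cp-test.txt"
    $ out/minikube-linux-amd64 -p functional-571055 cp functional-571055:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
    $ diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt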

                                                
                                    
TestFunctional/parallel/MySQL (23.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-571055 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-cqmkp" [1f68e195-3108-4742-a829-702911a01c7c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-cqmkp" [1f68e195-3108-4742-a829-702911a01c7c] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003531288s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-571055 exec mysql-859648c796-cqmkp -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-571055 exec mysql-859648c796-cqmkp -- mysql -ppassword -e "show databases;": exit status 1 (110.276153ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-571055 exec mysql-859648c796-cqmkp -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-571055 exec mysql-859648c796-cqmkp -- mysql -ppassword -e "show databases;": exit status 1 (114.423664ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-571055 exec mysql-859648c796-cqmkp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.79s)
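
The two ERROR 2002 attempts above are expected noise: the pod reports Running before mysqld has created its socket, so the query has to be retried. A sketch of the same flow with an explicit retry loop (the deployment name is inferred from the pod name, and the loop bounds are arbitrary):
    $ kubectl --context functional-571055 replace --force -f testdata/mysql.yaml
    $ kubectl --context functional-571055 wait --for=condition=ready pod -l app=mysql --timeout=600s
    $ for i in $(seq 1 10); do
        kubectl --context functional-571055 exec deploy/mysql -- mysql -ppassword -e "show databases;" && break
        sleep 5
      done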

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/959740/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo cat /etc/test/nested/copy/959740/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/959740.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo cat /etc/ssl/certs/959740.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/959740.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo cat /usr/share/ca-certificates/959740.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/9597402.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo cat /etc/ssl/certs/9597402.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/9597402.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo cat /usr/share/ca-certificates/9597402.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.21s)
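
CertSync checks that a user-supplied certificate (the file name appears to be derived from the test process id, 959740) is mirrored into both trust locations inside the node along with its hashed symlink. A quick manual spot check over the same paths, as a sketch:
    $ for p in /etc/ssl/certs/959740.pem /usr/share/ca-certificates/959740.pem /etc/ssl/certs/51391683.0; do
        out/minikube-linux-amd64 -p functional-571055 ssh "sudo test -s $p && echo OK: $p"
      done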

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-571055 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 ssh "sudo systemctl is-active docker": exit status 1 (388.403198ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 ssh "sudo systemctl is-active containerd": exit status 1 (355.161253ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
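
The non-zero exits here are the point of the test: with cri-o as the selected runtime, "systemctl is-active" reports docker and containerd as inactive and exits 3 inside the node, which "minikube ssh" surfaces as exit status 1. A hand-run equivalent; the final crio check is an added assumption, not part of the logged test:
    $ out/minikube-linux-amd64 -p functional-571055 ssh "sudo systemctl is-active docker"
    $ out/minikube-linux-amd64 -p functional-571055 ssh "sudo systemctl is-active containerd"
    $ out/minikube-linux-amd64 -p functional-571055 ssh "sudo systemctl is-active crio"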

                                                
                                    
TestFunctional/parallel/License (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-571055 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-571055 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-gcg8j" [bec6df28-f5e9-4594-9ea4-f2e617581b74] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-gcg8j" [bec6df28-f5e9-4594-9ea4-f2e617581b74] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004311304s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-amd64 -p functional-571055 version -o=json --components: (1.230462591s)
--- PASS: TestFunctional/parallel/Version/components (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-571055 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-571055 image ls --format short --alsologtostderr:
I0201 09:17:29.849805  996501 out.go:296] Setting OutFile to fd 1 ...
I0201 09:17:29.849976  996501 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:29.849986  996501 out.go:309] Setting ErrFile to fd 2...
I0201 09:17:29.849993  996501 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:29.850201  996501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
I0201 09:17:29.850843  996501 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:29.850976  996501 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:29.851385  996501 cli_runner.go:164] Run: docker container inspect functional-571055 --format={{.State.Status}}
I0201 09:17:29.870777  996501 ssh_runner.go:195] Run: systemctl --version
I0201 09:17:29.870842  996501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-571055
I0201 09:17:29.888834  996501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/functional-571055/id_rsa Username:docker}
I0201 09:17:29.982761  996501 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-571055 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | alpine             | 2b70e4aaac6b5 | 44.4MB |
| docker.io/library/nginx                 | latest             | a8758716bb6aa | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-571055 image ls --format table --alsologtostderr:
I0201 09:17:30.363787  996627 out.go:296] Setting OutFile to fd 1 ...
I0201 09:17:30.363929  996627 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:30.363939  996627 out.go:309] Setting ErrFile to fd 2...
I0201 09:17:30.363943  996627 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:30.364147  996627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
I0201 09:17:30.364752  996627 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:30.364860  996627 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:30.365272  996627 cli_runner.go:164] Run: docker container inspect functional-571055 --format={{.State.Status}}
I0201 09:17:30.385722  996627 ssh_runner.go:195] Run: systemctl --version
I0201 09:17:30.385792  996627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-571055
I0201 09:17:30.403853  996627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/functional-571055/id_rsa Username:docker}
I0201 09:17:30.499347  996627 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-571055 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e3
6b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748","repoDigests":["docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da","docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44407883"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-schedul
er@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f
35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docke
r.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-571055 image ls --format json --alsologtostderr:
I0201 09:17:30.122366  996545 out.go:296] Setting OutFile to fd 1 ...
I0201 09:17:30.122528  996545 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:30.122538  996545 out.go:309] Setting ErrFile to fd 2...
I0201 09:17:30.122543  996545 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:30.122742  996545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
I0201 09:17:30.123377  996545 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:30.123482  996545 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:30.123885  996545 cli_runner.go:164] Run: docker container inspect functional-571055 --format={{.State.Status}}
I0201 09:17:30.142708  996545 ssh_runner.go:195] Run: systemctl --version
I0201 09:17:30.142761  996545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-571055
I0201 09:17:30.161187  996545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/functional-571055/id_rsa Username:docker}
I0201 09:17:30.254929  996545 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-571055 image ls --format yaml --alsologtostderr:
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748
repoDigests:
- docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da
- docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027
repoTags:
- docker.io/library/nginx:alpine
size: "44407883"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-571055 image ls --format yaml --alsologtostderr:
I0201 09:17:30.606996  996784 out.go:296] Setting OutFile to fd 1 ...
I0201 09:17:30.607154  996784 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:30.607165  996784 out.go:309] Setting ErrFile to fd 2...
I0201 09:17:30.607171  996784 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:30.607389  996784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
I0201 09:17:30.608121  996784 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:30.608232  996784 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:30.608682  996784 cli_runner.go:164] Run: docker container inspect functional-571055 --format={{.State.Status}}
I0201 09:17:30.625989  996784 ssh_runner.go:195] Run: systemctl --version
I0201 09:17:30.626036  996784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-571055
I0201 09:17:30.642618  996784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/functional-571055/id_rsa Username:docker}
I0201 09:17:30.734986  996784 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
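
The four ImageList subtests exercise the same listing with different output formats; a compact way to reproduce all of them by hand is a simple loop over the format names taken from the logged commands:
    $ for fmt in short table json yaml; do
        out/minikube-linux-amd64 -p functional-571055 image ls --format "$fmt"
      done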

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 ssh pgrep buildkitd: exit status 1 (274.533552ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image build -t localhost/my-image:functional-571055 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-571055 image build -t localhost/my-image:functional-571055 testdata/build --alsologtostderr: (2.611135277s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-571055 image build -t localhost/my-image:functional-571055 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> faca517ef06
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-571055
--> 4abe18aa0e1
Successfully tagged localhost/my-image:functional-571055
4abe18aa0e141c02730ad540d6d17b22d20bb65bb2aa669b9e56cb60945abc0d
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-571055 image build -t localhost/my-image:functional-571055 testdata/build --alsologtostderr:
I0201 09:17:31.114022  996927 out.go:296] Setting OutFile to fd 1 ...
I0201 09:17:31.114184  996927 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:31.114203  996927 out.go:309] Setting ErrFile to fd 2...
I0201 09:17:31.114211  996927 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0201 09:17:31.114504  996927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
I0201 09:17:31.115179  996927 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:31.115752  996927 config.go:182] Loaded profile config "functional-571055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0201 09:17:31.116278  996927 cli_runner.go:164] Run: docker container inspect functional-571055 --format={{.State.Status}}
I0201 09:17:31.135251  996927 ssh_runner.go:195] Run: systemctl --version
I0201 09:17:31.135324  996927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-571055
I0201 09:17:31.152749  996927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/functional-571055/id_rsa Username:docker}
I0201 09:17:31.251296  996927 build_images.go:151] Building image from path: /tmp/build.3575165993.tar
I0201 09:17:31.251372  996927 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0201 09:17:31.260819  996927 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3575165993.tar
I0201 09:17:31.265433  996927 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3575165993.tar: stat -c "%s %y" /var/lib/minikube/build/build.3575165993.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3575165993.tar': No such file or directory
I0201 09:17:31.265466  996927 ssh_runner.go:362] scp /tmp/build.3575165993.tar --> /var/lib/minikube/build/build.3575165993.tar (3072 bytes)
I0201 09:17:31.297201  996927 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3575165993
I0201 09:17:31.340233  996927 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3575165993 -xf /var/lib/minikube/build/build.3575165993.tar
I0201 09:17:31.350812  996927 crio.go:297] Building image: /var/lib/minikube/build/build.3575165993
I0201 09:17:31.350895  996927 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-571055 /var/lib/minikube/build/build.3575165993 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0201 09:17:33.639735  996927 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-571055 /var/lib/minikube/build/build.3575165993 --cgroup-manager=cgroupfs: (2.288809973s)
I0201 09:17:33.639801  996927 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3575165993
I0201 09:17:33.648885  996927 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3575165993.tar
I0201 09:17:33.657086  996927 build_images.go:207] Built localhost/my-image:functional-571055 from /tmp/build.3575165993.tar
I0201 09:17:33.657123  996927 build_images.go:123] succeeded building to: functional-571055
I0201 09:17:33.657130  996927 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.12s)
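
The STEP lines above imply that testdata/build contains a three-step Dockerfile; the reconstruction below is hypothetical (inferred from the logged steps, not read from the fixture), and the trailing image ls is just a manual verification:
    # Assumed contents of testdata/build/Dockerfile:
    #   FROM gcr.io/k8s-minikube/busybox
    #   RUN true
    #   ADD content.txt /
    $ out/minikube-linux-amd64 -p functional-571055 image build -t localhost/my-image:functional-571055 testdata/build
    $ out/minikube-linux-amd64 -p functional-571055 image ls | grep my-image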

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.889672141s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-571055
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-571055 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-571055 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-571055 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 992542: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-571055 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-571055 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-571055 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [37008b08-d94e-4d99-bb2b-6f0f541ae1c3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [37008b08-d94e-4d99-bb2b-6f0f541ae1c3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.004403996s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 service list -o json
functional_test.go:1493: Took "594.661733ms" to run "out/minikube-linux-amd64 -p functional-571055 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32540
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image rm gcr.io/google-containers/addon-resizer:functional-571055 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32540
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (13.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdany-port736955691/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1706779043141115705" to /tmp/TestFunctionalparallelMountCmdany-port736955691/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1706779043141115705" to /tmp/TestFunctionalparallelMountCmdany-port736955691/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1706779043141115705" to /tmp/TestFunctionalparallelMountCmdany-port736955691/001/test-1706779043141115705
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (296.718909ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  1 09:17 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  1 09:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  1 09:17 test-1706779043141115705
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh cat /mount-9p/test-1706779043141115705
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-571055 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [44c901cd-fb0b-427d-b4b8-d5cf9ce4035f] Pending
helpers_test.go:344: "busybox-mount" [44c901cd-fb0b-427d-b4b8-d5cf9ce4035f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [44c901cd-fb0b-427d-b4b8-d5cf9ce4035f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [44c901cd-fb0b-427d-b4b8-d5cf9ce4035f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.003478308s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-571055 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdany-port736955691/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-571055 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "409.148675ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "69.886442ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.30.125 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-571055 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "492.855602ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "77.422423ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdspecific-port1935960855/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (315.977644ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdspecific-port1935960855/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 ssh "sudo umount -f /mount-9p": exit status 1 (273.570029ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-571055 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdspecific-port1935960855/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560295867/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560295867/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560295867/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T" /mount1: exit status 1 (326.018293ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-571055 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-571055 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560295867/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560295867/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-571055 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560295867/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-571055
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-571055
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-571055
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (86.95s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-518837 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0201 09:18:47.205405  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-518837 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m26.951296219s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (86.95s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.91s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-518837 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-518837 addons enable ingress --alsologtostderr -v=5: (14.907199514s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.91s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-518837 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.60s)

                                                
                                    
TestJSONOutput/start/Command (41.22s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-930200 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0201 09:23:22.282557  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-930200 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (41.221449509s)
--- PASS: TestJSONOutput/start/Command (41.22s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-930200 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-930200 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.81s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-930200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-930200 --output=json --user=testUser: (5.807376832s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-714932 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-714932 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.706636ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"06b8613f-ee8a-4e48-899d-c204bb8d1364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-714932] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a3cd1f3-b686-401b-a0d7-d29129649b52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18051"}}
	{"specversion":"1.0","id":"6333b377-8e90-4b0c-b92b-8c87cc94883d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d7222374-13f5-4624-9b24-ba61f02f20d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig"}}
	{"specversion":"1.0","id":"ad6103f7-fc67-49e5-a2e2-ce6c78ea3b2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube"}}
	{"specversion":"1.0","id":"e194ef7a-2de1-4f78-8537-64c8edc7ff4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7bb51379-e3cf-4c0f-8753-4be969d99435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d6431682-9e61-4473-87e2-4809b9b25656","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-714932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-714932
--- PASS: TestErrorJSONOutput (0.24s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.1s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-401272 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-401272 --network=: (41.901884641s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-401272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-401272
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-401272: (2.176940505s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.10s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.76s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-024379 --network=bridge
E0201 09:24:36.958581  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:36.963984  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:36.974337  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:36.994725  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:37.035123  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:37.115481  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:37.275849  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:37.596455  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:38.237444  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:39.517770  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:42.078595  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:44.203675  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-024379 --network=bridge: (24.857446243s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-024379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-024379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-024379: (1.88603591s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.76s)

                                                
                                    
TestKicExistingNetwork (28.84s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-191014 --network=existing-network
E0201 09:24:47.198865  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:24:57.439652  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-191014 --network=existing-network: (26.77623062s)
helpers_test.go:175: Cleaning up "existing-network-191014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-191014
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-191014: (1.92235426s)
--- PASS: TestKicExistingNetwork (28.84s)

                                                
                                    
TestKicCustomSubnet (25.65s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-317997 --subnet=192.168.60.0/24
E0201 09:25:17.920632  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-317997 --subnet=192.168.60.0/24: (23.468318073s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-317997 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-317997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-317997
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-317997: (2.163989528s)
--- PASS: TestKicCustomSubnet (25.65s)

                                                
                                    
TestKicStaticIP (24.87s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-172642 --static-ip=192.168.200.200
E0201 09:25:58.882058  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:26:03.362345  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-172642 --static-ip=192.168.200.200: (22.61446785s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-172642 ip
helpers_test.go:175: Cleaning up "static-ip-172642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-172642
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-172642: (2.112905976s)
--- PASS: TestKicStaticIP (24.87s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (57.49s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-105865 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-105865 --driver=docker  --container-runtime=crio: (25.49632413s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-107882 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-107882 --driver=docker  --container-runtime=crio: (26.76382155s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-105865
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-107882
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-107882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-107882
E0201 09:27:00.359955  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-107882: (1.886493697s)
helpers_test.go:175: Cleaning up "first-105865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-105865
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-105865: (2.227311619s)
--- PASS: TestMinikubeProfile (57.49s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-902145 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-902145 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.8801229s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.88s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-902145 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-924141 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-924141 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.666817564s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.67s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-924141 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-902145 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-902145 --alsologtostderr -v=5: (1.629407934s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-924141 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-924141
E0201 09:27:20.803001  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-924141: (1.195648106s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-924141
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-924141: (6.520194625s)
E0201 09:27:28.044466  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (7.52s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-924141 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (60.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825335 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-825335 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (59.945837601s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.43s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-825335 -- rollout status deployment/busybox: (3.367319202s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-9qt7x -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-v7847 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-9qt7x -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-v7847 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-9qt7x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-v7847 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.94s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-9qt7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-9qt7x -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-v7847 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825335 -- exec busybox-5b5d89c9d6-v7847 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                    
TestMultiNode/serial/AddNode (35.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-825335 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-825335 -v 3 --alsologtostderr: (35.032182763s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.66s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-825335 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp testdata/cp-test.txt multinode-825335:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4082529216/001/cp-test_multinode-825335.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335:/home/docker/cp-test.txt multinode-825335-m02:/home/docker/cp-test_multinode-825335_multinode-825335-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m02 "sudo cat /home/docker/cp-test_multinode-825335_multinode-825335-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335:/home/docker/cp-test.txt multinode-825335-m03:/home/docker/cp-test_multinode-825335_multinode-825335-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m03 "sudo cat /home/docker/cp-test_multinode-825335_multinode-825335-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp testdata/cp-test.txt multinode-825335-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4082529216/001/cp-test_multinode-825335-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335-m02:/home/docker/cp-test.txt multinode-825335:/home/docker/cp-test_multinode-825335-m02_multinode-825335.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335 "sudo cat /home/docker/cp-test_multinode-825335-m02_multinode-825335.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335-m02:/home/docker/cp-test.txt multinode-825335-m03:/home/docker/cp-test_multinode-825335-m02_multinode-825335-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m03 "sudo cat /home/docker/cp-test_multinode-825335-m02_multinode-825335-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp testdata/cp-test.txt multinode-825335-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4082529216/001/cp-test_multinode-825335-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335-m03:/home/docker/cp-test.txt multinode-825335:/home/docker/cp-test_multinode-825335-m03_multinode-825335.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335 "sudo cat /home/docker/cp-test_multinode-825335-m03_multinode-825335.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335-m03:/home/docker/cp-test.txt multinode-825335-m02:/home/docker/cp-test_multinode-825335-m03_multinode-825335-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m02 "sudo cat /home/docker/cp-test_multinode-825335-m03_multinode-825335-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.88s)
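For reference, the copy round-trip exercised above can be reproduced by hand against the same profile; this is a minimal sketch using the commands from this run (the destination file name cp-test_copy.txt is illustrative):

	# copy a local file into the control-plane node and read it back over ssh
	out/minikube-linux-amd64 -p multinode-825335 cp testdata/cp-test.txt multinode-825335:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335 "sudo cat /home/docker/cp-test.txt"
	# copy the same file node-to-node and verify it landed on the worker
	out/minikube-linux-amd64 -p multinode-825335 cp multinode-825335:/home/docker/cp-test.txt multinode-825335-m02:/home/docker/cp-test_copy.txt
	out/minikube-linux-amd64 -p multinode-825335 ssh -n multinode-825335-m02 "sudo cat /home/docker/cp-test_copy.txt"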

                                                
                                    
TestMultiNode/serial/StopNode (2.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-825335 node stop m03: (1.188445371s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-825335 status: exit status 7 (488.825424ms)

                                                
                                                
-- stdout --
	multinode-825335
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-825335-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-825335-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-825335 status --alsologtostderr: exit status 7 (483.093088ms)

                                                
                                                
-- stdout --
	multinode-825335
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-825335-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-825335-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:29:24.939652 1057495 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:29:24.939778 1057495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:29:24.939788 1057495 out.go:309] Setting ErrFile to fd 2...
	I0201 09:29:24.939792 1057495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:29:24.940000 1057495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:29:24.940176 1057495 out.go:303] Setting JSON to false
	I0201 09:29:24.940217 1057495 mustload.go:65] Loading cluster: multinode-825335
	I0201 09:29:24.940267 1057495 notify.go:220] Checking for updates...
	I0201 09:29:24.940640 1057495 config.go:182] Loaded profile config "multinode-825335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:29:24.940659 1057495 status.go:255] checking status of multinode-825335 ...
	I0201 09:29:24.941125 1057495 cli_runner.go:164] Run: docker container inspect multinode-825335 --format={{.State.Status}}
	I0201 09:29:24.959662 1057495 status.go:330] multinode-825335 host status = "Running" (err=<nil>)
	I0201 09:29:24.959702 1057495 host.go:66] Checking if "multinode-825335" exists ...
	I0201 09:29:24.959986 1057495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-825335
	I0201 09:29:24.976534 1057495 host.go:66] Checking if "multinode-825335" exists ...
	I0201 09:29:24.976847 1057495 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0201 09:29:24.976891 1057495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-825335
	I0201 09:29:24.996042 1057495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34106 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/multinode-825335/id_rsa Username:docker}
	I0201 09:29:25.087428 1057495 ssh_runner.go:195] Run: systemctl --version
	I0201 09:29:25.091378 1057495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0201 09:29:25.102021 1057495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:29:25.154433 1057495 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:68 SystemTime:2024-02-01 09:29:25.144267978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:29:25.155066 1057495 kubeconfig.go:92] found "multinode-825335" server: "https://192.168.58.2:8443"
	I0201 09:29:25.155095 1057495 api_server.go:166] Checking apiserver status ...
	I0201 09:29:25.155138 1057495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0201 09:29:25.166050 1057495 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	I0201 09:29:25.174776 1057495 api_server.go:182] apiserver freezer: "5:freezer:/docker/c596f21c90ed14c88ae9771c0222ed1e6f51ef5237faafd65b360a8e7cb76fe8/crio/crio-6e6070ce1ff990675c6cbc0c1df1ea27cf848bbaf544b890d6917a9c3dc23cb1"
	I0201 09:29:25.174857 1057495 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c596f21c90ed14c88ae9771c0222ed1e6f51ef5237faafd65b360a8e7cb76fe8/crio/crio-6e6070ce1ff990675c6cbc0c1df1ea27cf848bbaf544b890d6917a9c3dc23cb1/freezer.state
	I0201 09:29:25.182588 1057495 api_server.go:204] freezer state: "THAWED"
	I0201 09:29:25.182617 1057495 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0201 09:29:25.186804 1057495 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0201 09:29:25.186827 1057495 status.go:421] multinode-825335 apiserver status = Running (err=<nil>)
	I0201 09:29:25.186837 1057495 status.go:257] multinode-825335 status: &{Name:multinode-825335 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0201 09:29:25.186854 1057495 status.go:255] checking status of multinode-825335-m02 ...
	I0201 09:29:25.187093 1057495 cli_runner.go:164] Run: docker container inspect multinode-825335-m02 --format={{.State.Status}}
	I0201 09:29:25.203631 1057495 status.go:330] multinode-825335-m02 host status = "Running" (err=<nil>)
	I0201 09:29:25.203661 1057495 host.go:66] Checking if "multinode-825335-m02" exists ...
	I0201 09:29:25.203946 1057495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-825335-m02
	I0201 09:29:25.219560 1057495 host.go:66] Checking if "multinode-825335-m02" exists ...
	I0201 09:29:25.219860 1057495 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0201 09:29:25.219905 1057495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-825335-m02
	I0201 09:29:25.238687 1057495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34111 SSHKeyPath:/home/jenkins/minikube-integration/18051-952908/.minikube/machines/multinode-825335-m02/id_rsa Username:docker}
	I0201 09:29:25.331500 1057495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0201 09:29:25.342494 1057495 status.go:257] multinode-825335-m02 status: &{Name:multinode-825335-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0201 09:29:25.342536 1057495 status.go:255] checking status of multinode-825335-m03 ...
	I0201 09:29:25.342865 1057495 cli_runner.go:164] Run: docker container inspect multinode-825335-m03 --format={{.State.Status}}
	I0201 09:29:25.359509 1057495 status.go:330] multinode-825335-m03 host status = "Stopped" (err=<nil>)
	I0201 09:29:25.359555 1057495 status.go:343] host is not running, skipping remaining checks
	I0201 09:29:25.359562 1057495 status.go:257] multinode-825335-m03 status: &{Name:multinode-825335-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)
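As the output above shows, once any node's host is stopped the status command reports it and returns exit status 7 rather than 0. A minimal sketch of that check, using the commands from this run:

	# stop one worker, then expect a non-zero status exit code
	out/minikube-linux-amd64 -p multinode-825335 node stop m03
	out/minikube-linux-amd64 -p multinode-825335 status
	echo $?   # 7 in this run, because multinode-825335-m03 is reported as Stopped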

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-825335 node start m03 --alsologtostderr: (11.11964603s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status
E0201 09:29:36.958182  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.85s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (113.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-825335
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-825335
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-825335: (24.777384738s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825335 --wait=true -v=8 --alsologtostderr
E0201 09:30:04.643224  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
E0201 09:31:03.362766  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-825335 --wait=true -v=8 --alsologtostderr: (1m28.1317306s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-825335
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.04s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-825335 node delete m03: (4.15563597s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.78s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-825335 stop: (23.558290502s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-825335 status: exit status 7 (108.043153ms)

                                                
                                                
-- stdout --
	multinode-825335
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-825335-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-825335 status --alsologtostderr: exit status 7 (99.164085ms)

                                                
                                                
-- stdout --
	multinode-825335
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-825335-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:31:58.756881 1067642 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:31:58.757158 1067642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:31:58.757168 1067642 out.go:309] Setting ErrFile to fd 2...
	I0201 09:31:58.757172 1067642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:31:58.757378 1067642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:31:58.757546 1067642 out.go:303] Setting JSON to false
	I0201 09:31:58.757584 1067642 mustload.go:65] Loading cluster: multinode-825335
	I0201 09:31:58.757706 1067642 notify.go:220] Checking for updates...
	I0201 09:31:58.758019 1067642 config.go:182] Loaded profile config "multinode-825335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:31:58.758035 1067642 status.go:255] checking status of multinode-825335 ...
	I0201 09:31:58.758493 1067642 cli_runner.go:164] Run: docker container inspect multinode-825335 --format={{.State.Status}}
	I0201 09:31:58.776444 1067642 status.go:330] multinode-825335 host status = "Stopped" (err=<nil>)
	I0201 09:31:58.776467 1067642 status.go:343] host is not running, skipping remaining checks
	I0201 09:31:58.776473 1067642 status.go:257] multinode-825335 status: &{Name:multinode-825335 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0201 09:31:58.776503 1067642 status.go:255] checking status of multinode-825335-m02 ...
	I0201 09:31:58.776772 1067642 cli_runner.go:164] Run: docker container inspect multinode-825335-m02 --format={{.State.Status}}
	I0201 09:31:58.793718 1067642 status.go:330] multinode-825335-m02 host status = "Stopped" (err=<nil>)
	I0201 09:31:58.793745 1067642 status.go:343] host is not running, skipping remaining checks
	I0201 09:31:58.793751 1067642 status.go:257] multinode-825335-m02 status: &{Name:multinode-825335-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.77s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (74.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825335 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0201 09:32:00.360286  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:32:26.407365  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-825335 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.58646273s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825335 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (74.22s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-825335
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825335-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-825335-m02 --driver=docker  --container-runtime=crio: exit status 14 (83.279504ms)

                                                
                                                
-- stdout --
	* [multinode-825335-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-825335-m02' is duplicated with machine name 'multinode-825335-m02' in profile 'multinode-825335'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825335-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-825335-m03 --driver=docker  --container-runtime=crio: (21.761607514s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-825335
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-825335: exit status 80 (304.358298ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-825335
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-825335-m03 already exists in multinode-825335-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-825335-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-825335-m03: (1.898345116s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.11s)
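Both failure modes above come from profile-name validation: a new profile may not reuse a machine name that already belongs to another profile, and node add refuses a node name that already exists. A minimal reproduction of the first check from this run:

	# starting a profile named after an existing machine is rejected with MK_USAGE
	out/minikube-linux-amd64 start -p multinode-825335-m02 --driver=docker --container-runtime=crio
	echo $?   # 14 in this run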

                                                
                                    
TestPreload (146.98s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-142500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0201 09:34:36.958758  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-142500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m23.408803339s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-142500 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-142500 image pull gcr.io/k8s-minikube/busybox: (2.468969438s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-142500
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-142500: (5.720199055s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-142500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0201 09:36:03.362341  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-142500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.800753288s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-142500 image list
helpers_test.go:175: Cleaning up "test-preload-142500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-142500
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-142500: (2.3470425s)
--- PASS: TestPreload (146.98s)

                                                
                                    
TestScheduledStopUnix (100.1s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-838740 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-838740 --memory=2048 --driver=docker  --container-runtime=crio: (24.643666284s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-838740 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-838740 -n scheduled-stop-838740
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-838740 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-838740 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-838740 -n scheduled-stop-838740
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-838740
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-838740 --schedule 15s
E0201 09:37:00.359724  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-838740
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-838740: exit status 7 (82.569378ms)

                                                
                                                
-- stdout --
	scheduled-stop-838740
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-838740 -n scheduled-stop-838740
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-838740 -n scheduled-stop-838740: exit status 7 (83.130938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-838740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-838740
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-838740: (3.892097415s)
--- PASS: TestScheduledStopUnix (100.10s)
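The scheduled-stop flow above boils down to scheduling, cancelling, and re-scheduling a delayed stop, then polling status until the host reports Stopped. A minimal sketch with the flags used in this run (the sleep duration is an arbitrary choice for illustration):

	# schedule a stop well in the future, then cancel it before it fires
	out/minikube-linux-amd64 stop -p scheduled-stop-838740 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-838740 --cancel-scheduled
	# re-schedule with a short delay and wait for the host to go down
	out/minikube-linux-amd64 stop -p scheduled-stop-838740 --schedule 15s
	sleep 30   # arbitrary wait, long enough for the 15s schedule to fire
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-838740   # prints Stopped, exits 7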

                                                
                                    
TestInsufficientStorage (10.67s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-042957 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-042957 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.215777838s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2975269d-9299-4d9e-bddf-a8580980e444","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-042957] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a89b4c0-be67-46c5-9c09-9bd9ddc828f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18051"}}
	{"specversion":"1.0","id":"2ce71d56-7a05-44c8-a30c-19a92d1cbd29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"85dda4f7-abe9-4245-bb77-ab249a2cc1ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig"}}
	{"specversion":"1.0","id":"9459b367-8baa-4055-aa37-9f42b25096d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube"}}
	{"specversion":"1.0","id":"53c1c75a-9658-4b62-bb0d-dd7747759b65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2ccb6de4-50ce-4feb-9635-8268ee37c1f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b3d108c6-17e0-42f5-894d-c3ccc8bebf87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6e33c7b6-f12c-4642-afb1-9bb24a520198","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8fe3ff3c-86a3-40f8-a9a3-56883230b1f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"23ed207c-d4a1-4afe-b991-788ba6ff5b63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"28f4e969-baa2-4d90-9ca1-79450c531b4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-042957 in cluster insufficient-storage-042957","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2102d804-f33f-4837-b339-1912f2575918","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7475944a-90c0-4815-a94d-499b9c0c9ffc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d0da224-589e-4f48-a68f-5736fbd1f192","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-042957 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-042957 --output=json --layout=cluster: exit status 7 (277.402702ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-042957","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-042957","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0201 09:37:56.797805 1088525 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-042957" does not appear in /home/jenkins/minikube-integration/18051-952908/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-042957 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-042957 --output=json --layout=cluster: exit status 7 (293.946053ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-042957","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-042957","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0201 09:37:57.091601 1088611 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-042957" does not appear in /home/jenkins/minikube-integration/18051-952908/kubeconfig
	E0201 09:37:57.101752 1088611 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/insufficient-storage-042957/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-042957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-042957
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-042957: (1.883838225s)
--- PASS: TestInsufficientStorage (10.67s)
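The low-disk condition here is simulated rather than real: the JSON trace above shows MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 being picked up before the start aborts with RSRC_DOCKER_STORAGE. Assuming those variables are set in the environment as the trace suggests, a sketch of the same check:

	# pretend /var is nearly full, then expect start to bail out with exit code 26
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-amd64 start -p insufficient-storage-042957 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	echo $?   # 26 (RSRC_DOCKER_STORAGE) in this run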

                                                
                                    
TestRunningBinaryUpgrade (95.02s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4197029488 start -p running-upgrade-971640 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4197029488 start -p running-upgrade-971640 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.552560129s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-971640 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-971640 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.392590831s)
helpers_test.go:175: Cleaning up "running-upgrade-971640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-971640
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-971640: (8.75554477s)
--- PASS: TestRunningBinaryUpgrade (95.02s)

                                                
                                    
TestKubernetesUpgrade (352.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-641347 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-641347 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.61268318s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-641347
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-641347: (1.248036876s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-641347 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-641347 status --format={{.Host}}: exit status 7 (120.177786ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-641347 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-641347 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.551034023s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-641347 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-641347 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-641347 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (100.24961ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-641347] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-641347
	    minikube start -p kubernetes-upgrade-641347 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6413472 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-641347 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-641347 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-641347 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.732995779s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-641347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-641347
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-641347: (4.175831612s)
--- PASS: TestKubernetesUpgrade (352.61s)
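The failed downgrade step is expected: minikube refuses to move an existing cluster from v1.29.0-rc.2 back to v1.16.0 (K8S_DOWNGRADE_UNSUPPORTED, exit 106) and instead prints the recovery options quoted above. The first of those options, as suggested by the CLI itself:

	# recreate the cluster at the older Kubernetes version instead of downgrading in place
	minikube delete -p kubernetes-upgrade-641347
	minikube start -p kubernetes-upgrade-641347 --kubernetes-version=v1.16.0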

                                                
                                    
TestMissingContainerUpgrade (171.87s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1536513956 start -p missing-upgrade-284676 --memory=2200 --driver=docker  --container-runtime=crio
E0201 09:38:23.405012  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1536513956 start -p missing-upgrade-284676 --memory=2200 --driver=docker  --container-runtime=crio: (1m41.04095403s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-284676
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-284676: (11.453295549s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-284676
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-284676 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-284676 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.937741182s)
helpers_test.go:175: Cleaning up "missing-upgrade-284676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-284676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-284676: (2.0220796s)
--- PASS: TestMissingContainerUpgrade (171.87s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-175676 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-175676 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (98.52378ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-175676] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
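As the MK_USAGE error above states, --no-kubernetes and --kubernetes-version are mutually exclusive; the remedy the CLI suggests is to drop the pinned version and start again:

	# clear any globally configured kubernetes-version, then start without Kubernetes
	minikube config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-175676 --no-kubernetes --driver=docker --container-runtime=crio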

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-175676 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-175676 --driver=docker  --container-runtime=crio: (37.531312051s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-175676 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.93s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (12.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-175676 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-175676 --no-kubernetes --driver=docker  --container-runtime=crio: (10.17662512s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-175676 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-175676 status -o json: exit status 2 (318.326562ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-175676","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-175676
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-175676: (1.970914678s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.47s)

                                                
                                    
TestNoKubernetes/serial/Start (8.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-175676 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-175676 --no-kubernetes --driver=docker  --container-runtime=crio: (8.132733973s)
--- PASS: TestNoKubernetes/serial/Start (8.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-175676 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-175676 "sudo systemctl is-active --quiet service kubelet": exit status 1 (318.055634ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.73s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-175676
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-175676: (1.209650693s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (10.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-175676 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-175676 --driver=docker  --container-runtime=crio: (10.138928658s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (10.14s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-175676 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-175676 "sudo systemctl is-active --quiet service kubelet": exit status 1 (376.43184ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (58.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2473965873 start -p stopped-upgrade-598537 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0201 09:39:36.958940  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2473965873 start -p stopped-upgrade-598537 --memory=2200 --vm-driver=docker  --container-runtime=crio: (28.9152907s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2473965873 -p stopped-upgrade-598537 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2473965873 -p stopped-upgrade-598537 stop: (2.242678105s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-598537 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-598537 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.203676209s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.36s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-598537
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-598537: (1.890703335s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.89s)

                                                
                                    
TestPause/serial/Start (56.84s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-746847 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-746847 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (56.837239553s)
--- PASS: TestPause/serial/Start (56.84s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (31.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-746847 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-746847 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.197783388s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.22s)

                                                
                                    
TestNetworkPlugins/group/false (3.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-969266 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-969266 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (180.705615ms)

                                                
                                                
-- stdout --
	* [false-969266] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0201 09:41:23.172322 1133338 out.go:296] Setting OutFile to fd 1 ...
	I0201 09:41:23.172523 1133338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:41:23.172537 1133338 out.go:309] Setting ErrFile to fd 2...
	I0201 09:41:23.172545 1133338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0201 09:41:23.172806 1133338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-952908/.minikube/bin
	I0201 09:41:23.173586 1133338 out.go:303] Setting JSON to false
	I0201 09:41:23.174923 1133338 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":59030,"bootTime":1706721453,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0201 09:41:23.175014 1133338 start.go:138] virtualization: kvm guest
	I0201 09:41:23.177496 1133338 out.go:177] * [false-969266] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0201 09:41:23.179156 1133338 out.go:177]   - MINIKUBE_LOCATION=18051
	I0201 09:41:23.179207 1133338 notify.go:220] Checking for updates...
	I0201 09:41:23.180595 1133338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0201 09:41:23.182006 1133338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-952908/kubeconfig
	I0201 09:41:23.183463 1133338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-952908/.minikube
	I0201 09:41:23.188636 1133338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0201 09:41:23.190150 1133338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0201 09:41:23.192405 1133338 config.go:182] Loaded profile config "cert-expiration-446910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:41:23.192579 1133338 config.go:182] Loaded profile config "kubernetes-upgrade-641347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0201 09:41:23.192719 1133338 config.go:182] Loaded profile config "pause-746847": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0201 09:41:23.192830 1133338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0201 09:41:23.217419 1133338 docker.go:122] docker version: linux-25.0.2:Docker Engine - Community
	I0201 09:41:23.217557 1133338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0201 09:41:23.278216 1133338 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:78 SystemTime:2024-02-01 09:41:23.267808216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0201 09:41:23.278305 1133338 docker.go:295] overlay module found
	I0201 09:41:23.280978 1133338 out.go:177] * Using the docker driver based on user configuration
	I0201 09:41:23.282504 1133338 start.go:298] selected driver: docker
	I0201 09:41:23.282521 1133338 start.go:902] validating driver "docker" against <nil>
	I0201 09:41:23.282534 1133338 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0201 09:41:23.284819 1133338 out.go:177] 
	W0201 09:41:23.286326 1133338 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0201 09:41:23.287811 1133338 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-969266 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-969266" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-446910
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:39:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-641347
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:41:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-746847
contexts:
- context:
    cluster: cert-expiration-446910
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-446910
  name: cert-expiration-446910
- context:
    cluster: kubernetes-upgrade-641347
    user: kubernetes-upgrade-641347
  name: kubernetes-upgrade-641347
- context:
    cluster: pause-746847
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:41:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-746847
  name: pause-746847
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-446910
  user:
    client-certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/cert-expiration-446910/client.crt
    client-key: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/cert-expiration-446910/client.key
- name: kubernetes-upgrade-641347
  user:
    client-certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/kubernetes-upgrade-641347/client.crt
    client-key: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/kubernetes-upgrade-641347/client.key
- name: pause-746847
  user:
    client-certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/pause-746847/client.crt
    client-key: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/pause-746847/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-969266

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969266"

                                                
                                                
----------------------- debugLogs end: false-969266 [took: 3.51754869s] --------------------------------
helpers_test.go:175: Cleaning up "false-969266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-969266
--- PASS: TestNetworkPlugins/group/false (3.87s)
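
Note (illustration, not from this run): the exit status 14 above is the guard this test asserts: minikube refuses to start a crio cluster without a CNI, so passing --cni=false trips MK_USAGE before any cluster is created, which is also why every debugLogs probe reports a missing false-969266 context. A start line that would satisfy the requirement could look like the following; --cni=bridge is an assumption, any supported CNI value works:

	out/minikube-linux-amd64 start -p false-969266 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio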

                                                
                                    
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-746847 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-746847 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-746847 --output=json --layout=cluster: exit status 2 (335.563772ms)

                                                
                                                
-- stdout --
	{"Name":"pause-746847","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-746847","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
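
Note (illustration, not from this run): in the cluster-layout JSON above, paused components carry StatusCode 418 / StatusName "Paused", and the status command itself exits with status 2 while the cluster is paused, which the test treats as the expected non-zero. One way to pull out just the per-component states (the jq filter is an assumption, not part of the test):

	out/minikube-linux-amd64 status -p pause-746847 --output=json --layout=cluster \
	  | jq '.Nodes[].Components | with_entries(.value |= .StatusName)'
	# -> {"apiserver": "Paused", "kubelet": "Stopped"} for the state shown above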

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-746847 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-746847 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
TestPause/serial/DeletePaused (2.75s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-746847 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-746847 --alsologtostderr -v=5: (2.752445925s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (19.21s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (19.14989486s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-746847
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-746847: exit status 1 (16.742183ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-746847: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (19.21s)
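
Note (illustration, not from this run): the three docker commands above are the post-delete sweep; once "delete -p pause-746847" has run, the profile's container, named volume, and network should all be gone, and the "no such volume" error is the expected answer. The same sweep narrowed to the profile name (the --filter flags are an assumption; the unfiltered commands are what the test runs):

	docker ps -a --filter name=pause-746847        # should list no container
	docker volume inspect pause-746847             # expected: "no such volume"
	docker network ls --filter name=pause-746847   # should list no per-profile network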

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (114.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-296392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-296392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m54.742652008s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (114.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-095603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-095603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m9.377566394s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-095603 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4669b222-f97d-46a8-bbc6-3d108c31adff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4669b222-f97d-46a8-bbc6-3d108c31adff] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004547232s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-095603 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)
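
Note (illustration, not from this run): stripped of the Go harness, the deploy step is a create, a readiness wait on the integration-test=busybox label, and a post-deploy exec of "ulimit -n" inside the pod as a sanity check. The kubectl wait form below is an assumption standing in for the harness's own 8m0s polling loop:

	kubectl --context no-preload-095603 create -f testdata/busybox.yaml
	kubectl --context no-preload-095603 wait --for=condition=Ready pod -l integration-test=busybox --timeout=480s
	kubectl --context no-preload-095603 exec busybox -- /bin/sh -c "ulimit -n"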

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-095603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-095603 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-095603 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-095603 --alsologtostderr -v=3: (11.877399054s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095603 -n no-preload-095603
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095603 -n no-preload-095603: exit status 7 (97.11212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-095603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
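
Note (illustration, not from this run): "minikube status" encodes host, cluster and Kubernetes state in its exit code, so the status-7 / "Stopped" result above is the normal answer for a profile that has just been stopped rather than deleted; the profile's config is still on disk, which is why "addons enable dashboard" can still be applied before the SecondStart below. Checking the code by hand:

	out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-095603 -n no-preload-095603
	echo "exit code: $?"   # 7 for a stopped-but-existing profile, as in the output above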

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (341.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-095603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-095603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m41.338846977s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095603 -n no-preload-095603
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (341.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-296392 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b5f0391c-75f7-4c71-8b7c-bf24545c3b96] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b5f0391c-75f7-4c71-8b7c-bf24545c3b96] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003802752s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-296392 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (46.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-740181 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-740181 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (46.008398669s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-296392 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-296392 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-296392 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-296392 --alsologtostderr -v=3: (12.026558619s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-296392 -n old-k8s-version-296392
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-296392 -n old-k8s-version-296392: exit status 7 (119.216887ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-296392 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (436.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-296392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0201 09:44:36.958501  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-296392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m15.938711681s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-296392 -n old-k8s-version-296392
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (436.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-740181 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a6aec45-83e7-4b75-ab9c-d7f192754f14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5a6aec45-83e7-4b75-ab9c-d7f192754f14] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003545913s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-740181 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-114213 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-114213 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (44.482410483s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-740181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-740181 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-740181 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-740181 --alsologtostderr -v=3: (11.97855027s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-740181 -n embed-certs-740181
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-740181 -n embed-certs-740181: exit status 7 (102.826769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-740181 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (337.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-740181 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-740181 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m36.743791932s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-740181 -n embed-certs-740181
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (337.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-114213 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f54c1cc6-636c-4e33-80bf-7ea9aae56f62] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f54c1cc6-636c-4e33-80bf-7ea9aae56f62] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003866903s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-114213 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-114213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-114213 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-114213 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-114213 --alsologtostderr -v=3: (11.880850692s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-114213 -n default-k8s-diff-port-114213
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-114213 -n default-k8s-diff-port-114213: exit status 7 (88.303845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-114213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-114213 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0201 09:46:03.362900  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
E0201 09:47:00.359939  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/functional-571055/client.crt: no such file or directory
E0201 09:49:06.407654  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-114213 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m43.78619404s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-114213 -n default-k8s-diff-port-114213
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zxx9t" [871a1903-8d02-4c9d-b54e-6e50bd30ba2e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zxx9t" [871a1903-8d02-4c9d-b54e-6e50bd30ba2e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003922967s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zxx9t" [871a1903-8d02-4c9d-b54e-6e50bd30ba2e] Running
E0201 09:49:36.958936  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/ingress-addon-legacy-518837/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003937909s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-095603 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095603 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-095603 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095603 -n no-preload-095603
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095603 -n no-preload-095603: exit status 2 (324.323645ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-095603 -n no-preload-095603
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-095603 -n no-preload-095603: exit status 2 (317.19013ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-095603 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095603 -n no-preload-095603
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-095603 -n no-preload-095603
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (39.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-192618 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-192618 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (39.850008994s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-192618 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-192618 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.200677885s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-192618 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-192618 --alsologtostderr -v=3: (1.279372489s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-192618 -n newest-cni-192618
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-192618 -n newest-cni-192618: exit status 7 (127.71547ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-192618 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (27.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-192618 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-192618 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (26.670461016s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-192618 -n newest-cni-192618
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dcpls" [82905713-a5ea-4c3a-b1bf-b15b7c882cc3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dcpls" [82905713-a5ea-4c3a-b1bf-b15b7c882cc3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005832823s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-192618 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-192618 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-192618 -n newest-cni-192618
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-192618 -n newest-cni-192618: exit status 2 (325.168945ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-192618 -n newest-cni-192618
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-192618 -n newest-cni-192618: exit status 2 (321.664485ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-192618 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-192618 -n newest-cni-192618
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-192618 -n newest-cni-192618
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dcpls" [82905713-a5ea-4c3a-b1bf-b15b7c882cc3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00504419s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-740181 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (46.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0201 09:51:03.363058  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/addons-642352/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (46.187672968s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-740181 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-740181 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-740181 -n embed-certs-740181
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-740181 -n embed-certs-740181: exit status 2 (353.112738ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-740181 -n embed-certs-740181
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-740181 -n embed-certs-740181: exit status 2 (343.834155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-740181 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-740181 -n embed-certs-740181
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-740181 -n embed-certs-740181
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (38.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (38.669355529s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (38.67s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k2p64" [a898a714-e772-4351-85ed-6a6f9b24a39a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k2p64" [a898a714-e772-4351-85ed-6a6f9b24a39a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004039751s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-prvzd" [49229250-27e9-4aad-b426-b73fde77e3be] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003877481s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-prvzd" [49229250-27e9-4aad-b426-b73fde77e3be] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003926171s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-296392 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k2p64" [a898a714-e772-4351-85ed-6a6f9b24a39a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00479055s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-114213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-296392 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-296392 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-296392 -n old-k8s-version-296392
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-296392 -n old-k8s-version-296392: exit status 2 (344.077728ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-296392 -n old-k8s-version-296392
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-296392 -n old-k8s-version-296392: exit status 2 (346.927205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-296392 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-296392 -n old-k8s-version-296392
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-296392 -n old-k8s-version-296392
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-969266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-969266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-79lks" [82f998f0-18f1-4adb-a588-813ccce68203] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-79lks" [82f998f0-18f1-4adb-a588-813ccce68203] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00434309s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zqwqq" [49de7aee-8e1a-4f4a-ae98-f410dbc8595e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005053523s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-114213 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-114213 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-114213 -n default-k8s-diff-port-114213
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-114213 -n default-k8s-diff-port-114213: exit status 2 (387.329782ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-114213 -n default-k8s-diff-port-114213
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-114213 -n default-k8s-diff-port-114213: exit status 2 (344.914519ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-114213 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-114213 -n default-k8s-diff-port-114213
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-114213 -n default-k8s-diff-port-114213
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)
E0201 09:53:41.459678  959740 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/no-preload-095603/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.099987456s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-969266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-969266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kvkft" [672414be-caeb-44db-a0c9-acf815f9fce3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kvkft" [672414be-caeb-44db-a0c9-acf815f9fce3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004896423s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-969266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (61.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.836952339s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.84s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-969266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (46.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (46.092539806s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (62.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m2.820682472s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.82s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bdl7k" [d6ca6a93-9a36-48bd-9ee5-971ae727a0dd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006105581s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-969266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-969266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zcm2t" [93e7bd84-81b3-4a95-8057-116e174272cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zcm2t" [93e7bd84-81b3-4a95-8057-116e174272cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003787783s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-969266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-969266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mp8nx" [06ef2fca-e625-4202-8402-729888238d01] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mp8nx" [06ef2fca-e625-4202-8402-729888238d01] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004383372s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-969266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-969266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2lb5n" [db8f1cc7-d36c-4033-b6ef-dd81e02d257d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2lb5n" [db8f1cc7-d36c-4033-b6ef-dd81e02d257d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004747904s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-969266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-969266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-969266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-n8jkg" [19c227de-4f55-407d-8276-c523703257c4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.13804449s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (47.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-969266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (47.601769572s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.60s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-969266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-969266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4sjds" [3611d2e7-b281-4848-abcd-91aa88c59ce6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4sjds" [3611d2e7-b281-4848-abcd-91aa88c59ce6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004108038s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-969266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-969266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-969266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-46gnt" [6f158214-c764-4686-bc51-4be67e6ea9a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-46gnt" [6f158214-c764-4686-bc51-4be67e6ea9a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004082563s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-969266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-969266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (27/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-303726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-303726
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-969266 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-969266" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-446910
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:39:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-641347
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:41:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-746847
contexts:
- context:
    cluster: cert-expiration-446910
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-446910
  name: cert-expiration-446910
- context:
    cluster: kubernetes-upgrade-641347
    user: kubernetes-upgrade-641347
  name: kubernetes-upgrade-641347
- context:
    cluster: pause-746847
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:41:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-746847
  name: pause-746847
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-446910
  user:
    client-certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/cert-expiration-446910/client.crt
    client-key: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/cert-expiration-446910/client.key
- name: kubernetes-upgrade-641347
  user:
    client-certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/kubernetes-upgrade-641347/client.crt
    client-key: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/kubernetes-upgrade-641347/client.key
- name: pause-746847
  user:
    client-certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/pause-746847/client.crt
    client-key: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/pause-746847/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-969266

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969266"

                                                
                                                
----------------------- debugLogs end: kubenet-969266 [took: 3.494185507s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-969266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-969266
--- SKIP: TestNetworkPlugins/group/kubenet (3.69s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-969266 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-969266" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-446910
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:39:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-641347
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18051-952908/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:41:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-746847
contexts:
- context:
    cluster: cert-expiration-446910
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:40:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-446910
  name: cert-expiration-446910
- context:
    cluster: kubernetes-upgrade-641347
    user: kubernetes-upgrade-641347
  name: kubernetes-upgrade-641347
- context:
    cluster: pause-746847
    extensions:
    - extension:
        last-update: Thu, 01 Feb 2024 09:41:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-746847
  name: pause-746847
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-446910
  user:
    client-certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/cert-expiration-446910/client.crt
    client-key: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/cert-expiration-446910/client.key
- name: kubernetes-upgrade-641347
  user:
    client-certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/kubernetes-upgrade-641347/client.crt
    client-key: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/kubernetes-upgrade-641347/client.key
- name: pause-746847
  user:
    client-certificate: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/pause-746847/client.crt
    client-key: /home/jenkins/minikube-integration/18051-952908/.minikube/profiles/pause-746847/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-969266

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-969266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969266"

                                                
                                                
----------------------- debugLogs end: cilium-969266 [took: 4.088286123s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-969266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-969266
--- SKIP: TestNetworkPlugins/group/cilium (4.28s)

                                                
                                    