Test Report: Docker_Linux_crio 19740

f4f6e0076e771cedcca340e072cd1813dc91a89c:2024-10-01:36461

Failed tests (2/327)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 34    | TestAddons/parallel/Ingress       | 150.01       |
| 36    | TestAddons/parallel/MetricsServer | 331.44       |
TestAddons/parallel/Ingress (150.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-003557 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-003557 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-003557 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a94eed6c-a7fa-455c-8b57-f7c876a2b3a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a94eed6c-a7fa-455c-8b57-f7c876a2b3a0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004032302s
I1001 22:59:30.953262   16095 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-003557 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.888132973s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
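Editor's note on the failure above: the `ssh: Process exited with status 28` line most likely surfaces curl's own exit code 28 ("operation timed out"), propagated unchanged through the ssh session; the 2m9s runtime is consistent with a request that never completed. A minimal sketch of that propagation follows (the retry command is shown only as a comment, since it assumes the `addons-003557` cluster from this run is still live):

```shell
#!/bin/sh
# The ssh wrapper reports the remote command's exit status verbatim:
# a child process exiting 28 surfaces as "status 28" in the parent,
# which is exactly what the stderr line above shows.
sh -c 'exit 28' || echo "propagated status: $?"

# To retry the failed step by hand against this run's profile
# (hypothetical reproduction; requires the live addons-003557 cluster):
#   out/minikube-linux-amd64 -p addons-003557 ssh \
#     "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
```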
addons_test.go:286: (dbg) Run:  kubectl --context addons-003557 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-003557
helpers_test.go:235: (dbg) docker inspect addons-003557:

-- stdout --
	[
	    {
	        "Id": "e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff",
	        "Created": "2024-10-01T22:47:46.000812598Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18163,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-01T22:47:46.140159634Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1e9ad061035bd5b30872a757d87ebe8d5dc61829c56d176a3bb4ef156d71dbc8",
	        "ResolvConfPath": "/var/lib/docker/containers/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff/hostname",
	        "HostsPath": "/var/lib/docker/containers/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff/hosts",
	        "LogPath": "/var/lib/docker/containers/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff-json.log",
	        "Name": "/addons-003557",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-003557:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-003557",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fc1a80bd84cd48e1be9101afc74670c23d38996937a2b16194a3917e5b7da15c-init/diff:/var/lib/docker/overlay2/b9404fff46f8e735d2bf051ec5059d82dbc01f063c2a94263bbafaa62c37fadc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc1a80bd84cd48e1be9101afc74670c23d38996937a2b16194a3917e5b7da15c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc1a80bd84cd48e1be9101afc74670c23d38996937a2b16194a3917e5b7da15c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc1a80bd84cd48e1be9101afc74670c23d38996937a2b16194a3917e5b7da15c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-003557",
	                "Source": "/var/lib/docker/volumes/addons-003557/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-003557",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-003557",
	                "name.minikube.sigs.k8s.io": "addons-003557",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "307d7cdc38ea628e0dfcba85c98e607c34fa01f82b4bbeb52716621b4276720c",
	            "SandboxKey": "/var/run/docker/netns/307d7cdc38ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-003557": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b349c7b1c5228352689da8597b81b9e506a3bcef928ffcaf2f324cfe2c11add3",
	                    "EndpointID": "eb8ad19fb69207ef817348b3b7c0e210303b25e993f2a45501a50dcf7e4a2c23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-003557",
	                        "e707a4e961c1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-003557 -n addons-003557
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-003557 logs -n 25: (1.122414162s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-195979                                                                     | download-only-195979   | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| delete  | -p download-only-179949                                                                     | download-only-179949   | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| start   | --download-only -p                                                                          | download-docker-848534 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | download-docker-848534                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-848534                                                                   | download-docker-848534 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-560533   | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | binary-mirror-560533                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36859                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-560533                                                                     | binary-mirror-560533   | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | addons-003557                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | addons-003557                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-003557 --wait=true                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:50 UTC | 01 Oct 24 22:50 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | -p addons-003557                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-003557 ip                                                                            | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:59 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | -p addons-003557                                                                            |                        |         |         |                     |                     |
	| addons  | addons-003557 addons                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-003557 ssh cat                                                                       | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | /opt/local-path-provisioner/pvc-0e7d7921-5349-40a6-8079-5946d984cc77_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-003557 addons                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-003557 addons                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-003557 addons                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-003557 ssh curl -s                                                                   | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-003557 ip                                                                            | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 23:01 UTC | 01 Oct 24 23:01 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 22:47:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 22:47:21.965658   17406 out.go:345] Setting OutFile to fd 1 ...
	I1001 22:47:21.965789   17406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:21.965798   17406 out.go:358] Setting ErrFile to fd 2...
	I1001 22:47:21.965804   17406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:21.965996   17406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 22:47:21.966616   17406 out.go:352] Setting JSON to false
	I1001 22:47:21.967442   17406 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1789,"bootTime":1727821053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 22:47:21.967541   17406 start.go:139] virtualization: kvm guest
	I1001 22:47:21.969853   17406 out.go:177] * [addons-003557] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 22:47:21.971350   17406 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 22:47:21.971352   17406 notify.go:220] Checking for updates...
	I1001 22:47:21.973883   17406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 22:47:21.975207   17406 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	I1001 22:47:21.976372   17406 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	I1001 22:47:21.977578   17406 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 22:47:21.978933   17406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 22:47:21.980553   17406 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 22:47:22.002383   17406 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 22:47:22.002490   17406 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 22:47:22.046510   17406 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 22:47:22.03757525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 22:47:22.046629   17406 docker.go:318] overlay module found
	I1001 22:47:22.049336   17406 out.go:177] * Using the docker driver based on user configuration
	I1001 22:47:22.050531   17406 start.go:297] selected driver: docker
	I1001 22:47:22.050551   17406 start.go:901] validating driver "docker" against <nil>
	I1001 22:47:22.050566   17406 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 22:47:22.051343   17406 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 22:47:22.096445   17406 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 22:47:22.086770495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 22:47:22.096619   17406 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 22:47:22.096879   17406 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 22:47:22.098952   17406 out.go:177] * Using Docker driver with root privileges
	I1001 22:47:22.100092   17406 cni.go:84] Creating CNI manager for ""
	I1001 22:47:22.100164   17406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 22:47:22.100180   17406 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 22:47:22.100269   17406 start.go:340] cluster config:
	{Name:addons-003557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:22.101480   17406 out.go:177] * Starting "addons-003557" primary control-plane node in "addons-003557" cluster
	I1001 22:47:22.102565   17406 cache.go:121] Beginning downloading kic base image for docker with crio
	I1001 22:47:22.103583   17406 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1001 22:47:22.104776   17406 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:22.104803   17406 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 22:47:22.104812   17406 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:22.104820   17406 cache.go:56] Caching tarball of preloaded images
	I1001 22:47:22.104891   17406 preload.go:172] Found /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 22:47:22.104902   17406 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 22:47:22.105207   17406 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/config.json ...
	I1001 22:47:22.105229   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/config.json: {Name:mk130bfc3a5e480d2dbe9dd1c51226ea03a7c34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:22.121824   17406 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 22:47:22.121946   17406 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 22:47:22.121965   17406 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1001 22:47:22.121972   17406 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1001 22:47:22.121984   17406 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 22:47:22.121992   17406 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1001 22:47:33.736785   17406 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1001 22:47:33.736832   17406 cache.go:194] Successfully downloaded all kic artifacts
	I1001 22:47:33.736876   17406 start.go:360] acquireMachinesLock for addons-003557: {Name:mkb213c143cb031a9d9505d7f03929c80936d14e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:33.736984   17406 start.go:364] duration metric: took 87.033µs to acquireMachinesLock for "addons-003557"
	I1001 22:47:33.737012   17406 start.go:93] Provisioning new machine with config: &{Name:addons-003557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 22:47:33.737120   17406 start.go:125] createHost starting for "" (driver="docker")
	I1001 22:47:33.739047   17406 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1001 22:47:33.739300   17406 start.go:159] libmachine.API.Create for "addons-003557" (driver="docker")
	I1001 22:47:33.739341   17406 client.go:168] LocalClient.Create starting
	I1001 22:47:33.739444   17406 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem
	I1001 22:47:33.893440   17406 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/cert.pem
	I1001 22:47:34.236814   17406 cli_runner.go:164] Run: docker network inspect addons-003557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1001 22:47:34.251923   17406 cli_runner.go:211] docker network inspect addons-003557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1001 22:47:34.252014   17406 network_create.go:284] running [docker network inspect addons-003557] to gather additional debugging logs...
	I1001 22:47:34.252041   17406 cli_runner.go:164] Run: docker network inspect addons-003557
	W1001 22:47:34.268031   17406 cli_runner.go:211] docker network inspect addons-003557 returned with exit code 1
	I1001 22:47:34.268060   17406 network_create.go:287] error running [docker network inspect addons-003557]: docker network inspect addons-003557: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-003557 not found
	I1001 22:47:34.268070   17406 network_create.go:289] output of [docker network inspect addons-003557]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-003557 not found
	
	** /stderr **
	I1001 22:47:34.268155   17406 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 22:47:34.283445   17406 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001970770}
	I1001 22:47:34.283488   17406 network_create.go:124] attempt to create docker network addons-003557 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1001 22:47:34.283530   17406 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-003557 addons-003557
	I1001 22:47:34.343380   17406 network_create.go:108] docker network addons-003557 192.168.49.0/24 created
	I1001 22:47:34.343410   17406 kic.go:121] calculated static IP "192.168.49.2" for the "addons-003557" container
	I1001 22:47:34.343480   17406 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1001 22:47:34.358525   17406 cli_runner.go:164] Run: docker volume create addons-003557 --label name.minikube.sigs.k8s.io=addons-003557 --label created_by.minikube.sigs.k8s.io=true
	I1001 22:47:34.376414   17406 oci.go:103] Successfully created a docker volume addons-003557
	I1001 22:47:34.376483   17406 cli_runner.go:164] Run: docker run --rm --name addons-003557-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-003557 --entrypoint /usr/bin/test -v addons-003557:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1001 22:47:41.574653   17406 cli_runner.go:217] Completed: docker run --rm --name addons-003557-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-003557 --entrypoint /usr/bin/test -v addons-003557:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (7.198132962s)
	I1001 22:47:41.574679   17406 oci.go:107] Successfully prepared a docker volume addons-003557
	I1001 22:47:41.574693   17406 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:41.574709   17406 kic.go:194] Starting extracting preloaded images to volume ...
	I1001 22:47:41.574751   17406 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-003557:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1001 22:47:45.939322   17406 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-003557:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.364532873s)
	I1001 22:47:45.939351   17406 kic.go:203] duration metric: took 4.364639329s to extract preloaded images to volume ...
	W1001 22:47:45.939467   17406 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1001 22:47:45.939559   17406 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1001 22:47:45.984733   17406 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-003557 --name addons-003557 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-003557 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-003557 --network addons-003557 --ip 192.168.49.2 --volume addons-003557:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1001 22:47:46.294775   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Running}}
	I1001 22:47:46.313253   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:47:46.331610   17406 cli_runner.go:164] Run: docker exec addons-003557 stat /var/lib/dpkg/alternatives/iptables
	I1001 22:47:46.374431   17406 oci.go:144] the created container "addons-003557" has a running status.
	I1001 22:47:46.374458   17406 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa...
	I1001 22:47:46.595347   17406 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1001 22:47:46.617714   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:47:46.637480   17406 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1001 22:47:46.637505   17406 kic_runner.go:114] Args: [docker exec --privileged addons-003557 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1001 22:47:46.742213   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:47:46.761331   17406 machine.go:93] provisionDockerMachine start ...
	I1001 22:47:46.761403   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:46.781751   17406 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.781937   17406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1001 22:47:46.781948   17406 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 22:47:46.919607   17406 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-003557
	
	I1001 22:47:46.919636   17406 ubuntu.go:169] provisioning hostname "addons-003557"
	I1001 22:47:46.919694   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:46.937167   17406 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.937371   17406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1001 22:47:46.937387   17406 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-003557 && echo "addons-003557" | sudo tee /etc/hostname
	I1001 22:47:47.075264   17406 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-003557
	
	I1001 22:47:47.075379   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.092511   17406 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:47.092712   17406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1001 22:47:47.092730   17406 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-003557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-003557/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-003557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 22:47:47.216477   17406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 22:47:47.216508   17406 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9314/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9314/.minikube}
	I1001 22:47:47.216562   17406 ubuntu.go:177] setting up certificates
	I1001 22:47:47.216574   17406 provision.go:84] configureAuth start
	I1001 22:47:47.216669   17406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-003557
	I1001 22:47:47.233955   17406 provision.go:143] copyHostCerts
	I1001 22:47:47.234027   17406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9314/.minikube/ca.pem (1078 bytes)
	I1001 22:47:47.234135   17406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9314/.minikube/cert.pem (1123 bytes)
	I1001 22:47:47.234193   17406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9314/.minikube/key.pem (1675 bytes)
	I1001 22:47:47.235066   17406 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca-key.pem org=jenkins.addons-003557 san=[127.0.0.1 192.168.49.2 addons-003557 localhost minikube]
	I1001 22:47:47.320983   17406 provision.go:177] copyRemoteCerts
	I1001 22:47:47.321036   17406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 22:47:47.321067   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.337815   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:47.428975   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 22:47:47.449509   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 22:47:47.469912   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 22:47:47.490262   17406 provision.go:87] duration metric: took 273.670781ms to configureAuth
	I1001 22:47:47.490292   17406 ubuntu.go:193] setting minikube options for container-runtime
	I1001 22:47:47.490483   17406 config.go:182] Loaded profile config "addons-003557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:47:47.490582   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.509165   17406 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:47.509353   17406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1001 22:47:47.509371   17406 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 22:47:47.720771   17406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 22:47:47.720797   17406 machine.go:96] duration metric: took 959.446625ms to provisionDockerMachine
	I1001 22:47:47.720807   17406 client.go:171] duration metric: took 13.981456631s to LocalClient.Create
	I1001 22:47:47.720822   17406 start.go:167] duration metric: took 13.98152478s to libmachine.API.Create "addons-003557"
	I1001 22:47:47.720830   17406 start.go:293] postStartSetup for "addons-003557" (driver="docker")
	I1001 22:47:47.720839   17406 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 22:47:47.720888   17406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 22:47:47.720921   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.736824   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:47.829383   17406 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 22:47:47.832326   17406 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 22:47:47.832367   17406 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 22:47:47.832379   17406 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 22:47:47.832387   17406 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 22:47:47.832403   17406 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9314/.minikube/addons for local assets ...
	I1001 22:47:47.832473   17406 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9314/.minikube/files for local assets ...
	I1001 22:47:47.832507   17406 start.go:296] duration metric: took 111.669492ms for postStartSetup
	I1001 22:47:47.832851   17406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-003557
	I1001 22:47:47.848800   17406 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/config.json ...
	I1001 22:47:47.849037   17406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 22:47:47.849084   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.867196   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:47.961081   17406 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 22:47:47.964867   17406 start.go:128] duration metric: took 14.227732466s to createHost
	I1001 22:47:47.964894   17406 start.go:83] releasing machines lock for "addons-003557", held for 14.227898395s
	I1001 22:47:47.964961   17406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-003557
	I1001 22:47:47.981226   17406 ssh_runner.go:195] Run: cat /version.json
	I1001 22:47:47.981289   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.981351   17406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 22:47:47.981415   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.999689   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:48.000381   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:48.167809   17406 ssh_runner.go:195] Run: systemctl --version
	I1001 22:47:48.171824   17406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 22:47:48.307475   17406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 22:47:48.311664   17406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 22:47:48.329096   17406 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1001 22:47:48.329173   17406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 22:47:48.355787   17406 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1001 22:47:48.355816   17406 start.go:495] detecting cgroup driver to use...
	I1001 22:47:48.355849   17406 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 22:47:48.355896   17406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 22:47:48.369720   17406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 22:47:48.379814   17406 docker.go:217] disabling cri-docker service (if available) ...
	I1001 22:47:48.379866   17406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 22:47:48.392241   17406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 22:47:48.405050   17406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 22:47:48.477750   17406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 22:47:48.557758   17406 docker.go:233] disabling docker service ...
	I1001 22:47:48.557825   17406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 22:47:48.574378   17406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 22:47:48.584680   17406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 22:47:48.656864   17406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 22:47:48.734388   17406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 22:47:48.744547   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 22:47:48.758939   17406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 22:47:48.758999   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.767902   17406 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 22:47:48.767969   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.777165   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.785934   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.794575   17406 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 22:47:48.802623   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.811134   17406 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.824799   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.833425   17406 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 22:47:48.840567   17406 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 22:47:48.840616   17406 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 22:47:48.853034   17406 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 22:47:48.860101   17406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:47:48.932938   17406 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 22:47:49.013101   17406 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 22:47:49.013171   17406 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 22:47:49.016248   17406 start.go:563] Will wait 60s for crictl version
	I1001 22:47:49.016296   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:47:49.019138   17406 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 22:47:49.050775   17406 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1001 22:47:49.050869   17406 ssh_runner.go:195] Run: crio --version
	I1001 22:47:49.085494   17406 ssh_runner.go:195] Run: crio --version
	I1001 22:47:49.119717   17406 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1001 22:47:49.121389   17406 cli_runner.go:164] Run: docker network inspect addons-003557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 22:47:49.137600   17406 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1001 22:47:49.140940   17406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 22:47:49.150714   17406 kubeadm.go:883] updating cluster {Name:addons-003557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 22:47:49.150811   17406 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:49.150849   17406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 22:47:49.209981   17406 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 22:47:49.210001   17406 crio.go:433] Images already preloaded, skipping extraction
	I1001 22:47:49.210038   17406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 22:47:49.241730   17406 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 22:47:49.241755   17406 cache_images.go:84] Images are preloaded, skipping loading
	I1001 22:47:49.241764   17406 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1001 22:47:49.241870   17406 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-003557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 22:47:49.241948   17406 ssh_runner.go:195] Run: crio config
	I1001 22:47:49.281545   17406 cni.go:84] Creating CNI manager for ""
	I1001 22:47:49.281566   17406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 22:47:49.281578   17406 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 22:47:49.281604   17406 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-003557 NodeName:addons-003557 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 22:47:49.281753   17406 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-003557"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 22:47:49.281822   17406 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 22:47:49.289969   17406 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 22:47:49.290024   17406 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 22:47:49.297824   17406 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1001 22:47:49.313832   17406 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 22:47:49.329934   17406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1001 22:47:49.345639   17406 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1001 22:47:49.348638   17406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 22:47:49.358552   17406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:47:49.433557   17406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 22:47:49.445080   17406 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557 for IP: 192.168.49.2
	I1001 22:47:49.445103   17406 certs.go:194] generating shared ca certs ...
	I1001 22:47:49.445119   17406 certs.go:226] acquiring lock for ca certs: {Name:mk7cb0f487f2a8d9c123ba652fec1471e60d3b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.445253   17406 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9314/.minikube/ca.key
	I1001 22:47:49.585758   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt ...
	I1001 22:47:49.585786   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt: {Name:mkc5719ab44495abc481f23183d7d9e421125e39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.585957   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/ca.key ...
	I1001 22:47:49.585969   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/ca.key: {Name:mk23f3406b4f0ad789667e5d9fb6a7603bbf1ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.586037   17406 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.key
	I1001 22:47:49.658550   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.crt ...
	I1001 22:47:49.658574   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.crt: {Name:mk9cce3bf71d8e0978167d49f7c6f8e831fdefa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.658718   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.key ...
	I1001 22:47:49.658731   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.key: {Name:mkef79102ab9280df6f1a7a404a4398633d758f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.658798   17406 certs.go:256] generating profile certs ...
	I1001 22:47:49.658851   17406 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.key
	I1001 22:47:49.658861   17406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt with IP's: []
	I1001 22:47:49.793769   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt ...
	I1001 22:47:49.793796   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: {Name:mkeff56f49cc833d579e831a27db3aefd104c038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.793946   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.key ...
	I1001 22:47:49.793956   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.key: {Name:mk79d47f08c301fdf38bba01e9948b8a19b92e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.794022   17406 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key.3b7c86b9
	I1001 22:47:49.794039   17406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt.3b7c86b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1001 22:47:49.944270   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt.3b7c86b9 ...
	I1001 22:47:49.944302   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt.3b7c86b9: {Name:mkc1f3e3c50dc41c25258fc4b110b78449125159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.944459   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key.3b7c86b9 ...
	I1001 22:47:49.944471   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key.3b7c86b9: {Name:mk8074a7d76731f3de2b18828eb73edee99f98ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.944539   17406 certs.go:381] copying /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt.3b7c86b9 -> /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt
	I1001 22:47:49.944609   17406 certs.go:385] copying /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key.3b7c86b9 -> /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key
	I1001 22:47:49.944675   17406 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.key
	I1001 22:47:49.944692   17406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.crt with IP's: []
	I1001 22:47:50.002575   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.crt ...
	I1001 22:47:50.002604   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.crt: {Name:mk7aa7796444cb7db480754240edadf71c6ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:50.002756   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.key ...
	I1001 22:47:50.002766   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.key: {Name:mkd5e8e1c201fcd0d55d1450e1ebf7eacb34e8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:50.002932   17406 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca-key.pem (1675 bytes)
	I1001 22:47:50.002966   17406 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem (1078 bytes)
	I1001 22:47:50.002986   17406 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/cert.pem (1123 bytes)
	I1001 22:47:50.003006   17406 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/key.pem (1675 bytes)
	I1001 22:47:50.003567   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 22:47:50.025057   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1001 22:47:50.045942   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 22:47:50.067774   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 22:47:50.089684   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 22:47:50.110309   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 22:47:50.130604   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 22:47:50.151444   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 22:47:50.171949   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 22:47:50.192683   17406 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 22:47:50.208856   17406 ssh_runner.go:195] Run: openssl version
	I1001 22:47:50.213819   17406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 22:47:50.221978   17406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:50.225107   17406 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:50.225163   17406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:50.230970   17406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 22:47:50.238894   17406 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 22:47:50.241855   17406 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 22:47:50.241895   17406 kubeadm.go:392] StartCluster: {Name:addons-003557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:50.242009   17406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 22:47:50.242051   17406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 22:47:50.279852   17406 cri.go:89] found id: ""
	I1001 22:47:50.279904   17406 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 22:47:50.288569   17406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 22:47:50.296553   17406 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1001 22:47:50.296603   17406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 22:47:50.304391   17406 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 22:47:50.304416   17406 kubeadm.go:157] found existing configuration files:
	
	I1001 22:47:50.304455   17406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 22:47:50.312308   17406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 22:47:50.312367   17406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 22:47:50.319871   17406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 22:47:50.327652   17406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 22:47:50.327702   17406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 22:47:50.335577   17406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 22:47:50.343158   17406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 22:47:50.343214   17406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 22:47:50.350509   17406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 22:47:50.357900   17406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 22:47:50.357958   17406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
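The grep/rm sequence above is minikube's stale-config check: for each kubeconfig file, a non-zero `grep` exit status (here status 2, because the files do not exist yet) means the file does not point at this control plane, so minikube removes it and lets kubeadm regenerate it. A minimal sketch of the same pattern, using a throwaway path (`/tmp/demo-admin.conf` is a stand-in, not the real `/etc/kubernetes/admin.conf`):

```shell
# Stale-config check in the shape of the log above, against a throwaway path.
endpoint="https://control-plane.minikube.internal:8443"
f=/tmp/demo-admin.conf
rm -f "$f"                     # simulate the file being absent, as in the log
if ! grep -qs "$endpoint" "$f"; then
  # grep exits non-zero when the file is missing or the endpoint is absent;
  # minikube then removes the file so kubeadm can write a fresh one.
  rm -f "$f"
fi
```

`grep -qs` suppresses both output and the "No such file or directory" error, so the exit status alone drives the decision, matching how minikube only inspects the process exit code.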
	I1001 22:47:50.365220   17406 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1001 22:47:50.396658   17406 kubeadm.go:310] W1001 22:47:50.395948    1294 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 22:47:50.397133   17406 kubeadm.go:310] W1001 22:47:50.396605    1294 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 22:47:50.414851   17406 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I1001 22:47:50.464349   17406 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 22:47:59.347626   17406 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 22:47:59.347682   17406 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 22:47:59.347753   17406 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1001 22:47:59.347804   17406 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I1001 22:47:59.347835   17406 kubeadm.go:310] OS: Linux
	I1001 22:47:59.347884   17406 kubeadm.go:310] CGROUPS_CPU: enabled
	I1001 22:47:59.347927   17406 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1001 22:47:59.347977   17406 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1001 22:47:59.348019   17406 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1001 22:47:59.348098   17406 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1001 22:47:59.348182   17406 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1001 22:47:59.348256   17406 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1001 22:47:59.348325   17406 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1001 22:47:59.348398   17406 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1001 22:47:59.348486   17406 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 22:47:59.348625   17406 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 22:47:59.348780   17406 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 22:47:59.348866   17406 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 22:47:59.351490   17406 out.go:235]   - Generating certificates and keys ...
	I1001 22:47:59.351593   17406 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 22:47:59.351703   17406 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 22:47:59.351807   17406 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 22:47:59.351900   17406 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 22:47:59.351994   17406 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 22:47:59.352074   17406 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 22:47:59.352146   17406 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 22:47:59.352295   17406 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-003557 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 22:47:59.352367   17406 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 22:47:59.352516   17406 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-003557 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 22:47:59.352591   17406 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 22:47:59.352710   17406 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 22:47:59.352802   17406 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 22:47:59.352889   17406 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 22:47:59.352963   17406 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 22:47:59.353047   17406 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 22:47:59.353105   17406 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 22:47:59.353163   17406 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 22:47:59.353227   17406 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 22:47:59.353315   17406 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 22:47:59.353406   17406 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 22:47:59.354888   17406 out.go:235]   - Booting up control plane ...
	I1001 22:47:59.354976   17406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 22:47:59.355049   17406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 22:47:59.355106   17406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 22:47:59.355203   17406 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 22:47:59.355299   17406 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 22:47:59.355339   17406 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 22:47:59.355448   17406 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 22:47:59.355555   17406 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 22:47:59.355605   17406 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.595998ms
	I1001 22:47:59.355669   17406 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 22:47:59.355745   17406 kubeadm.go:310] [api-check] The API server is healthy after 4.501005191s
	I1001 22:47:59.355854   17406 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 22:47:59.356014   17406 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 22:47:59.356075   17406 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 22:47:59.356319   17406 kubeadm.go:310] [mark-control-plane] Marking the node addons-003557 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 22:47:59.356399   17406 kubeadm.go:310] [bootstrap-token] Using token: cmw5x7.tpys45wndsft8y8j
	I1001 22:47:59.358932   17406 out.go:235]   - Configuring RBAC rules ...
	I1001 22:47:59.359033   17406 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 22:47:59.359105   17406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 22:47:59.359243   17406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 22:47:59.359401   17406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 22:47:59.359501   17406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 22:47:59.359575   17406 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 22:47:59.359680   17406 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 22:47:59.359720   17406 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 22:47:59.359764   17406 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 22:47:59.359770   17406 kubeadm.go:310] 
	I1001 22:47:59.359821   17406 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 22:47:59.359827   17406 kubeadm.go:310] 
	I1001 22:47:59.359905   17406 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 22:47:59.359911   17406 kubeadm.go:310] 
	I1001 22:47:59.359932   17406 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 22:47:59.359985   17406 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 22:47:59.360033   17406 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 22:47:59.360042   17406 kubeadm.go:310] 
	I1001 22:47:59.360096   17406 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 22:47:59.360104   17406 kubeadm.go:310] 
	I1001 22:47:59.360151   17406 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 22:47:59.360161   17406 kubeadm.go:310] 
	I1001 22:47:59.360227   17406 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 22:47:59.360301   17406 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 22:47:59.360380   17406 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 22:47:59.360392   17406 kubeadm.go:310] 
	I1001 22:47:59.360518   17406 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 22:47:59.360636   17406 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 22:47:59.360660   17406 kubeadm.go:310] 
	I1001 22:47:59.360797   17406 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cmw5x7.tpys45wndsft8y8j \
	I1001 22:47:59.360926   17406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:015f6e369eec431c1e019d1e80b3ccdc24b9f33f6d33017d1e56e33b1290b97b \
	I1001 22:47:59.360958   17406 kubeadm.go:310] 	--control-plane 
	I1001 22:47:59.360968   17406 kubeadm.go:310] 
	I1001 22:47:59.361088   17406 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 22:47:59.361096   17406 kubeadm.go:310] 
	I1001 22:47:59.361179   17406 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cmw5x7.tpys45wndsft8y8j \
	I1001 22:47:59.361299   17406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:015f6e369eec431c1e019d1e80b3ccdc24b9f33f6d33017d1e56e33b1290b97b 
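The join commands printed above carry a `--discovery-token-ca-cert-hash`, which kubeadm computes as the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch of that derivation using a throwaway self-signed CA (the real value in this run comes from the cluster CA under `/var/lib/minikube/certs`):

```shell
# Derive a kubeadm-style discovery-token-ca-cert-hash from a demo CA cert.
# The demo key/cert paths are stand-ins, not the cluster's real CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -subj "/CN=demo-ca" -days 1 -out /tmp/demo-ca.crt 2>/dev/null
hash=$(openssl x509 -in /tmp/demo-ca.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER 2>/dev/null \
  | sha256sum | awk '{print $1}')
echo "sha256:${hash}"          # same "sha256:<64 hex chars>" format as the log
```

Joining nodes recompute this hash from the CA certificate served by the control plane and refuse to join on a mismatch, which is what makes the token-based bootstrap safe against a spoofed API server.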
	I1001 22:47:59.361310   17406 cni.go:84] Creating CNI manager for ""
	I1001 22:47:59.361316   17406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 22:47:59.362823   17406 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 22:47:59.364160   17406 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 22:47:59.367784   17406 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 22:47:59.367802   17406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 22:47:59.385208   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 22:47:59.578317   17406 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 22:47:59.578397   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:47:59.578450   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-003557 minikube.k8s.io/updated_at=2024_10_01T22_47_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=addons-003557 minikube.k8s.io/primary=true
	I1001 22:47:59.584962   17406 ops.go:34] apiserver oom_adj: -16
	I1001 22:47:59.656946   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:00.157815   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:00.657621   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:01.157633   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:01.657846   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:02.156964   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:02.657855   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:03.157706   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:03.657568   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.157889   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.657850   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.721346   17406 kubeadm.go:1113] duration metric: took 5.142994627s to wait for elevateKubeSystemPrivileges
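The repeated `kubectl get sa default` runs above are minikube polling roughly every 500ms until the `default` service account exists, before granting kube-system elevated privileges; the loop here took about 5.1s. The retry shape can be sketched like this, with a local file standing in for the `kubectl` readiness check (no real cluster involved):

```shell
# Poll-until-ready loop in the shape of the elevateKubeSystemPrivileges wait.
# /tmp/demo-sa-ready stands in for "kubectl get sa default" succeeding.
ready_file=/tmp/demo-sa-ready
rm -f "$ready_file"
( sleep 0.3; touch "$ready_file" ) &   # condition becomes true asynchronously
attempts=0
until [ -e "$ready_file" ] || [ "$attempts" -ge 20 ]; do
  attempts=$((attempts + 1))           # bound the retries so failure is finite
  sleep 0.1                            # minikube waits ~500ms between checks
done
wait
```

Bounding the attempt count is what turns an indefinite hang into the duration-metric failure mode that minikube reports instead.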
	I1001 22:48:04.721392   17406 kubeadm.go:394] duration metric: took 14.479499895s to StartCluster
	I1001 22:48:04.721417   17406 settings.go:142] acquiring lock: {Name:mk0d6ca98bed6b4aaaa1127bd072eb7aeabfdcd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:48:04.721541   17406 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9314/kubeconfig
	I1001 22:48:04.722040   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/kubeconfig: {Name:mk09b2ba83f78d625a17bbeb72a5433822606f82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:48:04.722283   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 22:48:04.722303   17406 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 22:48:04.722382   17406 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1001 22:48:04.722492   17406 addons.go:69] Setting yakd=true in profile "addons-003557"
	I1001 22:48:04.722529   17406 addons.go:234] Setting addon yakd=true in "addons-003557"
	I1001 22:48:04.722534   17406 config.go:182] Loaded profile config "addons-003557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:48:04.722562   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.722577   17406 addons.go:69] Setting inspektor-gadget=true in profile "addons-003557"
	I1001 22:48:04.722588   17406 addons.go:234] Setting addon inspektor-gadget=true in "addons-003557"
	I1001 22:48:04.722606   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.722792   17406 addons.go:69] Setting storage-provisioner=true in profile "addons-003557"
	I1001 22:48:04.722813   17406 addons.go:234] Setting addon storage-provisioner=true in "addons-003557"
	I1001 22:48:04.722846   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.723093   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723134   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723289   17406 addons.go:69] Setting volcano=true in profile "addons-003557"
	I1001 22:48:04.723321   17406 addons.go:234] Setting addon volcano=true in "addons-003557"
	I1001 22:48:04.723319   17406 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-003557"
	I1001 22:48:04.723362   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.723362   17406 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-003557"
	I1001 22:48:04.723371   17406 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-003557"
	I1001 22:48:04.723372   17406 addons.go:69] Setting metrics-server=true in profile "addons-003557"
	I1001 22:48:04.723402   17406 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-003557"
	I1001 22:48:04.723405   17406 addons.go:234] Setting addon metrics-server=true in "addons-003557"
	I1001 22:48:04.723427   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.723450   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.723655   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723841   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723850   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723898   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723984   17406 addons.go:69] Setting default-storageclass=true in profile "addons-003557"
	I1001 22:48:04.724027   17406 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-003557"
	I1001 22:48:04.724048   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.724317   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.724335   17406 addons.go:69] Setting volumesnapshots=true in profile "addons-003557"
	I1001 22:48:04.724351   17406 addons.go:234] Setting addon volumesnapshots=true in "addons-003557"
	I1001 22:48:04.725675   17406 out.go:177] * Verifying Kubernetes components...
	I1001 22:48:04.725814   17406 addons.go:69] Setting ingress=true in profile "addons-003557"
	I1001 22:48:04.725822   17406 addons.go:69] Setting ingress-dns=true in profile "addons-003557"
	I1001 22:48:04.725842   17406 addons.go:234] Setting addon ingress-dns=true in "addons-003557"
	I1001 22:48:04.725885   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.725932   17406 addons.go:234] Setting addon ingress=true in "addons-003557"
	I1001 22:48:04.725706   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.725976   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.726056   17406 addons.go:69] Setting cloud-spanner=true in profile "addons-003557"
	I1001 22:48:04.726094   17406 addons.go:234] Setting addon cloud-spanner=true in "addons-003557"
	I1001 22:48:04.726176   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.726610   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.727259   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.727275   17406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:48:04.727548   17406 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-003557"
	I1001 22:48:04.727631   17406 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-003557"
	I1001 22:48:04.727667   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.727927   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.728228   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.728461   17406 addons.go:69] Setting gcp-auth=true in profile "addons-003557"
	I1001 22:48:04.728502   17406 mustload.go:65] Loading cluster: addons-003557
	I1001 22:48:04.729831   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.724320   17406 addons.go:69] Setting registry=true in profile "addons-003557"
	I1001 22:48:04.731909   17406 addons.go:234] Setting addon registry=true in "addons-003557"
	I1001 22:48:04.731976   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.761349   17406 config.go:182] Loaded profile config "addons-003557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:48:04.761709   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.773679   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	W1001 22:48:04.774632   17406 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1001 22:48:04.776439   17406 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1001 22:48:04.780935   17406 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 22:48:04.782822   17406 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 22:48:04.782844   17406 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 22:48:04.782949   17406 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 22:48:04.782960   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.783297   17406 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 22:48:04.783310   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 22:48:04.783362   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.783379   17406 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 22:48:04.785676   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 22:48:04.785698   17406 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 22:48:04.785762   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.786025   17406 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 22:48:04.786036   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 22:48:04.786073   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.803474   17406 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 22:48:04.809127   17406 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:04.809981   17406 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 22:48:04.810009   17406 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 22:48:04.810090   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.815601   17406 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 22:48:04.818138   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.836503   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.838435   17406 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-003557"
	I1001 22:48:04.838479   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.834687   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.838890   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.838991   17406 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:04.839037   17406 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 22:48:04.839062   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 22:48:04.841082   17406 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 22:48:04.841100   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 22:48:04.841147   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.841459   17406 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 22:48:04.841472   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 22:48:04.841515   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.841989   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 22:48:04.842004   17406 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 22:48:04.842050   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.840584   17406 addons.go:234] Setting addon default-storageclass=true in "addons-003557"
	I1001 22:48:04.842644   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.843130   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.848289   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.860375   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.862128   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 22:48:04.863039   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.864963   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 22:48:04.867128   17406 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1001 22:48:04.868190   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 22:48:04.868695   17406 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 22:48:04.870267   17406 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 22:48:04.870388   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 22:48:04.871849   17406 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 22:48:04.871869   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 22:48:04.871928   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.873552   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 22:48:04.874769   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 22:48:04.875803   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 22:48:04.876787   17406 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 22:48:04.876808   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 22:48:04.876871   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.877877   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 22:48:04.879027   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 22:48:04.879046   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 22:48:04.879108   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.896988   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.897772   17406 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 22:48:04.898037   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.899331   17406 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 22:48:04.899346   17406 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 22:48:04.899393   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.899943   17406 out.go:177]   - Using image docker.io/busybox:stable
	I1001 22:48:04.901330   17406 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 22:48:04.901350   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 22:48:04.901396   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.903647   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.906366   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.906901   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.911371   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.919919   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.923190   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	W1001 22:48:04.937748   17406 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1001 22:48:04.937784   17406 retry.go:31] will retry after 245.553478ms: ssh: handshake failed: EOF
	I1001 22:48:05.044460   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 22:48:05.149504   17406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 22:48:05.244763   17406 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 22:48:05.244790   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 22:48:05.247441   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 22:48:05.247467   17406 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 22:48:05.255151   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 22:48:05.350397   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 22:48:05.353981   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 22:48:05.356688   17406 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 22:48:05.356772   17406 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 22:48:05.434081   17406 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 22:48:05.434111   17406 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 22:48:05.440774   17406 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 22:48:05.440799   17406 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 22:48:05.442127   17406 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 22:48:05.442153   17406 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 22:48:05.442498   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 22:48:05.447581   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 22:48:05.448702   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 22:48:05.448767   17406 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 22:48:05.537858   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 22:48:05.634188   17406 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 22:48:05.634282   17406 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 22:48:05.634658   17406 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 22:48:05.634714   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 22:48:05.656103   17406 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 22:48:05.656196   17406 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 22:48:05.734394   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 22:48:05.734486   17406 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 22:48:05.738382   17406 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 22:48:05.738465   17406 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 22:48:05.750681   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 22:48:05.834178   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 22:48:05.834260   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 22:48:05.853449   17406 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 22:48:05.853557   17406 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 22:48:05.854544   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 22:48:05.937457   17406 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 22:48:05.937544   17406 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 22:48:05.955138   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 22:48:06.040282   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 22:48:06.040361   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 22:48:06.055459   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 22:48:06.055512   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 22:48:06.233415   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 22:48:06.334921   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 22:48:06.335013   17406 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 22:48:06.446553   17406 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 22:48:06.446583   17406 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 22:48:06.448065   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 22:48:06.448120   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 22:48:06.648982   17406 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:06.649091   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 22:48:06.744867   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 22:48:06.744964   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 22:48:06.839947   17406 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 22:48:06.840038   17406 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 22:48:06.847989   17406 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.803435883s)
	I1001 22:48:06.848109   17406 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1001 22:48:06.848297   17406 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.698707251s)
	I1001 22:48:06.850634   17406 node_ready.go:35] waiting up to 6m0s for node "addons-003557" to be "Ready" ...
	I1001 22:48:06.936095   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.680884918s)
	I1001 22:48:06.952185   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 22:48:06.952290   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 22:48:07.046029   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:07.152152   17406 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 22:48:07.152183   17406 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 22:48:07.253851   17406 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 22:48:07.253930   17406 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 22:48:07.546186   17406 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-003557" context rescaled to 1 replicas
	I1001 22:48:07.643139   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 22:48:07.643168   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 22:48:07.736064   17406 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 22:48:07.736092   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 22:48:07.843017   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 22:48:07.843045   17406 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 22:48:07.945987   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 22:48:08.149556   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 22:48:08.149657   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 22:48:08.347596   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 22:48:08.347625   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 22:48:08.453419   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 22:48:08.453461   17406 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 22:48:08.551170   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 22:48:08.934245   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:09.156296   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.805793746s)
	I1001 22:48:09.156443   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.80237087s)
	I1001 22:48:09.156545   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.714018563s)
	I1001 22:48:10.262196   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.814578716s)
	I1001 22:48:10.262228   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.724325869s)
	I1001 22:48:10.262241   17406 addons.go:475] Verifying addon ingress=true in "addons-003557"
	I1001 22:48:10.262295   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.511532565s)
	I1001 22:48:10.262344   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.407731054s)
	I1001 22:48:10.262424   17406 addons.go:475] Verifying addon registry=true in "addons-003557"
	I1001 22:48:10.262449   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.307281624s)
	I1001 22:48:10.262524   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.029014048s)
	I1001 22:48:10.262471   17406 addons.go:475] Verifying addon metrics-server=true in "addons-003557"
	I1001 22:48:10.263932   17406 out.go:177] * Verifying ingress addon...
	I1001 22:48:10.264861   17406 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-003557 service yakd-dashboard -n yakd-dashboard
	
	I1001 22:48:10.264948   17406 out.go:177] * Verifying registry addon...
	I1001 22:48:10.266635   17406 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 22:48:10.267328   17406 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 22:48:10.271443   17406 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 22:48:10.271467   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:10.271684   17406 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 22:48:10.271705   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:10.839083   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:10.840074   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:10.969733   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.923654926s)
	W1001 22:48:10.969775   17406 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 22:48:10.969799   17406 retry.go:31] will retry after 348.931901ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 22:48:10.969848   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.023813837s)
	I1001 22:48:11.273518   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:11.275365   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:11.300306   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.749082805s)
	I1001 22:48:11.300338   17406 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-003557"
	I1001 22:48:11.301903   17406 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 22:48:11.304072   17406 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 22:48:11.319594   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:11.336495   17406 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 22:48:11.336516   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:11.356466   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:11.770754   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:11.771281   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:11.807174   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:12.044346   17406 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 22:48:12.044431   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:12.064044   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:12.335702   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:12.336336   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:12.336575   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:12.455616   17406 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 22:48:12.556372   17406 addons.go:234] Setting addon gcp-auth=true in "addons-003557"
	I1001 22:48:12.556451   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:12.556977   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:12.575077   17406 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 22:48:12.575122   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:12.591755   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:12.770978   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:12.771705   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:12.806993   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:13.270247   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:13.270626   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:13.307899   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:13.771156   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:13.771818   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:13.834047   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:13.853263   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:14.077275   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.757629461s)
	I1001 22:48:14.077300   17406 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.502192166s)
	I1001 22:48:14.079522   17406 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:14.081106   17406 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 22:48:14.082569   17406 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 22:48:14.082599   17406 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 22:48:14.145508   17406 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 22:48:14.145533   17406 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 22:48:14.163241   17406 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 22:48:14.163268   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 22:48:14.180620   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 22:48:14.271227   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:14.272236   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:14.334806   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:14.773664   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:14.835304   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:14.836087   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:14.849597   17406 addons.go:475] Verifying addon gcp-auth=true in "addons-003557"
	I1001 22:48:14.851291   17406 out.go:177] * Verifying gcp-auth addon...
	I1001 22:48:14.853922   17406 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 22:48:14.873862   17406 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 22:48:14.873883   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:15.270338   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:15.270783   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:15.307246   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:15.356807   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:15.770632   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:15.771171   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:15.808020   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:15.854301   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:15.856746   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:16.269987   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:16.270526   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:16.307533   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:16.356514   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:16.769934   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:16.770151   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:16.807684   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:16.856036   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:17.270438   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:17.270876   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:17.306905   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:17.356020   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:17.770021   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:17.770447   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:17.807401   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:17.856483   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:18.270001   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:18.270263   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:18.307622   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:18.353927   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:18.356795   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:18.770243   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:18.770423   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:18.807182   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:18.856355   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:19.270293   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:19.270760   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:19.306983   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:19.356333   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:19.770236   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:19.770588   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:19.806931   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:19.856132   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:20.269764   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:20.270131   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:20.307295   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:20.356684   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:20.769977   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:20.770289   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:20.806998   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:20.854017   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:20.856021   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:21.269990   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:21.270671   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:21.307571   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:21.356215   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:21.770045   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:21.770457   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:21.807411   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:21.856441   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.270231   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:22.270560   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:22.306891   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:22.356306   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.770278   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:22.770714   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:22.806988   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:22.854310   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:22.856551   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:23.270719   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:23.271080   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:23.307136   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.356463   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:23.770257   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:23.771091   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:23.806871   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.856467   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:24.270272   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:24.270697   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:24.307124   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:24.356423   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:24.770236   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:24.770810   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:24.806984   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:24.856308   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:25.270107   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:25.270486   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:25.307817   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:25.354036   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:25.356090   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:25.770115   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:25.770452   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:25.807859   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:25.856181   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:26.270067   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:26.270486   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:26.307892   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:26.356119   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:26.770110   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:26.770450   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:26.807460   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:26.856438   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:27.269982   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:27.269994   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:27.307505   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:27.356505   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:27.770059   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:27.770173   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:27.807736   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:27.854192   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:27.856498   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:28.270727   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:28.271282   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:28.307457   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:28.356581   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:28.769998   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:28.770036   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:28.808059   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:28.856145   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:29.270651   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:29.271118   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:29.307124   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:29.356565   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:29.769995   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:29.770342   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:29.807833   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:29.854293   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:29.855938   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:30.269821   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:30.270065   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:30.307631   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:30.356474   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:30.770463   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:30.770939   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:30.807015   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:30.856096   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:31.269986   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:31.270454   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:31.307836   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:31.356734   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:31.770032   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:31.770050   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:31.807504   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:31.856727   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:32.269915   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:32.270833   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:32.307963   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:32.354561   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:32.356222   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:32.770407   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:32.770750   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:32.807981   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:32.856347   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:33.271288   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:33.271840   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:33.307145   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:33.356512   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:33.770649   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:33.770939   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:33.806929   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:33.856186   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:34.270071   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:34.270606   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:34.307793   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:34.356450   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:34.770276   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:34.770646   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:34.807747   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:34.854253   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:34.856401   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:35.271929   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:35.272433   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:35.307747   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:35.356210   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:35.770015   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:35.770592   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:35.807721   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:35.856131   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:36.270003   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:36.270485   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:36.307733   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:36.356195   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:36.770066   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:36.770619   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:36.807735   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:36.856197   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:37.270033   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:37.270462   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:37.307551   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:37.353739   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:37.356249   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:37.770085   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:37.770491   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:37.807719   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:37.856832   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:38.270224   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:38.270417   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:38.307708   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:38.356674   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:38.769885   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:38.770088   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:38.807819   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:38.856128   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:39.269829   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:39.270211   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:39.307494   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:39.356698   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:39.770098   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:39.770533   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:39.807823   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:39.853947   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:39.856073   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:40.269795   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:40.270324   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:40.307346   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:40.356454   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:40.770387   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:40.770930   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:40.807199   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:40.856106   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:41.269868   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:41.270414   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:41.307660   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:41.356475   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:41.770496   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:41.770779   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:41.806765   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:41.856401   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:42.270030   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:42.270465   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:42.307901   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:42.354151   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:42.356375   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:42.770367   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:42.770789   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:42.807048   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:42.856035   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.269881   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:43.270408   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:43.307546   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:43.356320   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.770303   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:43.771009   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:43.807005   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:43.856572   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:44.270083   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:44.270266   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:44.307819   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.354296   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:44.356206   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:44.769948   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:44.770600   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:44.807719   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.856153   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:45.269902   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:45.270601   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:45.307650   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:45.356201   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:45.770047   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:45.770413   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:45.807516   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:45.856880   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.270244   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:46.270513   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:46.307171   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:46.356894   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.837009   17406 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 22:48:46.837099   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:46.837636   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:46.838246   17406 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 22:48:46.838308   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:46.854470   17406 node_ready.go:49] node "addons-003557" has status "Ready":"True"
	I1001 22:48:46.854499   17406 node_ready.go:38] duration metric: took 40.003787077s for node "addons-003557" to be "Ready" ...
	I1001 22:48:46.854510   17406 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 22:48:46.856967   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.862357   17406 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6cj4k" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:47.270868   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:47.271080   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:47.370305   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:47.371426   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:47.771279   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:47.771648   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:47.808528   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:47.859040   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:48.270472   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:48.270676   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:48.308675   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:48.357484   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:48.368144   17406 pod_ready.go:93] pod "coredns-7c65d6cfc9-6cj4k" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.368167   17406 pod_ready.go:82] duration metric: took 1.50578281s for pod "coredns-7c65d6cfc9-6cj4k" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.368192   17406 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.372085   17406 pod_ready.go:93] pod "etcd-addons-003557" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.372104   17406 pod_ready.go:82] duration metric: took 3.902562ms for pod "etcd-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.372119   17406 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.375948   17406 pod_ready.go:93] pod "kube-apiserver-addons-003557" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.375967   17406 pod_ready.go:82] duration metric: took 3.84126ms for pod "kube-apiserver-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.375975   17406 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.380086   17406 pod_ready.go:93] pod "kube-controller-manager-addons-003557" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.380108   17406 pod_ready.go:82] duration metric: took 4.126277ms for pod "kube-controller-manager-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.380119   17406 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-69j2j" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.454950   17406 pod_ready.go:93] pod "kube-proxy-69j2j" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.454976   17406 pod_ready.go:82] duration metric: took 74.851467ms for pod "kube-proxy-69j2j" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.454987   17406 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.773077   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:48.773436   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:48.808014   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:48.855063   17406 pod_ready.go:93] pod "kube-scheduler-addons-003557" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.855086   17406 pod_ready.go:82] duration metric: took 400.091569ms for pod "kube-scheduler-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.855098   17406 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.856702   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:49.335998   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:49.336533   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:49.336939   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:49.359877   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:49.839078   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:49.839237   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:49.839310   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:49.938791   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.345027   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:50.346478   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:50.347147   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:50.357494   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.838482   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:50.838874   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:50.839621   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:50.858886   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.862241   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:51.270295   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:51.270601   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:51.336927   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:51.357967   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:51.770922   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:51.771059   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:51.836453   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:51.857686   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:52.270788   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:52.270934   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:52.336063   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:52.357048   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:52.770644   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:52.770913   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:52.808138   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:52.857323   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:53.270387   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:53.270475   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:53.308919   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:53.357904   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:53.360086   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:53.770330   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:53.770687   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:53.809429   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:53.857845   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.270502   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:54.270498   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:54.308584   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:54.357803   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.770928   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:54.771192   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:54.807750   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:54.857831   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:55.270553   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:55.270717   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:55.307464   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.357542   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:55.360155   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:55.770389   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:55.770752   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:55.807837   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.857751   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:56.270662   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:56.270901   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:56.308108   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:56.357686   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:56.770335   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:56.770629   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:56.808461   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:56.857327   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:57.347712   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:57.349227   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:57.350294   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:57.358360   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:57.437893   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:57.770818   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:57.771190   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:57.836857   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:57.857499   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:58.271017   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:58.271255   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:58.308013   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:58.357136   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:58.770774   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:58.770998   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:58.808838   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:58.858035   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.270849   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:59.271083   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:59.308576   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:59.357554   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.770620   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:59.770839   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:59.808862   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:59.858005   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.860381   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:00.270886   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:00.271415   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:00.308468   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:00.357595   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:00.773372   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:00.773966   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:00.835696   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:00.857469   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:01.270900   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:01.271430   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:01.308330   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:01.357305   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:01.770890   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:01.771293   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:01.836611   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:01.857533   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:02.270434   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:02.270678   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:02.308450   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:02.357390   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:02.360382   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:02.773069   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:02.773263   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:02.808329   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:02.857388   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:03.271040   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:03.271342   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:03.308106   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:03.357056   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:03.770223   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:03.770477   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:03.807853   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:03.856815   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.270731   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:04.271510   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:04.308013   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:04.357172   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.770355   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:04.770476   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:04.859693   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:04.870022   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.870688   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.270706   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:05.270950   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:05.307672   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.357503   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:05.770328   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:05.770491   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:05.808800   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.857910   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.270987   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:06.271489   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:06.334933   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:06.356867   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.770692   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:06.770722   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:06.836543   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:06.857444   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.860800   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:07.270459   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.272101   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:07.336262   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:07.356967   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:07.770289   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.770373   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:07.808875   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:07.857723   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.270632   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:08.270824   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:08.308919   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:08.357677   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.770660   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:08.770930   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:08.870894   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.872072   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.270587   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:09.270885   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:09.307904   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.357870   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:09.360027   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:09.770574   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:09.770783   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:09.808133   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.857833   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:10.271803   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:10.271988   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:10.337093   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:10.357638   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:10.771386   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:10.771743   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:10.836500   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:10.857965   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:11.270803   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:11.271699   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:11.309242   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:11.357426   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:11.360351   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:11.773846   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:11.774503   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:11.808005   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:11.856894   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.271214   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:12.271675   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.371473   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.372812   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.770818   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:12.770907   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.808107   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.857108   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:13.271083   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:13.271183   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:13.308623   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:13.357278   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:13.770812   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:13.770929   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:13.807756   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:13.861712   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:13.870668   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:14.272188   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:14.272874   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:14.337169   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:14.358021   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:14.770775   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:14.772065   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:14.808123   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:14.857368   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:15.270485   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:15.270690   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:15.308475   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:15.434687   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:15.771027   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:15.771293   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:15.808035   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:15.856731   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:16.270650   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:16.270965   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:16.308084   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:16.357740   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:16.360682   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:16.772053   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:16.773234   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:16.807829   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:16.873374   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:17.270946   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:17.271240   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:17.308132   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:17.357062   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:17.770655   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:17.770984   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:17.809163   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:17.856765   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:18.270066   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:18.270460   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:18.308408   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:18.356777   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:18.771346   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:18.771772   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:18.837119   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:18.860725   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:18.936800   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:19.270448   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:19.271053   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:19.308266   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:19.357722   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:19.771255   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:19.771498   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:19.808963   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:19.858064   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:20.270811   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:20.271902   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:20.335248   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:20.357353   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:20.771141   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:20.771512   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:20.809271   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:20.857803   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:21.270737   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:21.271005   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:21.307551   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:21.360326   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:21.371789   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:21.836601   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:21.837451   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:21.838103   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:21.858607   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:22.337567   17406 kapi.go:107] duration metric: took 1m12.070234926s to wait for kubernetes.io/minikube-addons=registry ...
	I1001 22:49:22.337994   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:22.337991   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:22.357817   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:22.845550   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:22.849243   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:22.857437   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:23.336672   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:23.337254   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:23.359618   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:23.362724   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:23.771199   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:23.836551   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:23.857822   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:24.271163   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:24.335559   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:24.357599   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:24.770485   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:24.808027   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:24.857015   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.270777   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:25.309052   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:25.357842   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.771515   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:25.808258   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:25.857186   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.860833   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:26.271490   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:26.308389   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:26.357774   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:26.771369   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:26.807964   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:26.857046   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:27.275532   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:27.333670   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:27.375626   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:27.771294   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:27.808115   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:27.857299   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:28.270896   17406 kapi.go:107] duration metric: took 1m18.004259194s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 22:49:28.336478   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:28.357413   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:28.360085   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:28.808056   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:28.856872   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:29.309372   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:29.357060   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:29.836725   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:29.857414   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:30.308336   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:30.357183   17406 kapi.go:107] duration metric: took 1m15.503256547s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 22:49:30.359016   17406 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-003557 cluster.
	I1001 22:49:30.360475   17406 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 22:49:30.361943   17406 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1001 22:49:30.836931   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:30.861406   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:31.308572   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:31.809000   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:32.308347   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:32.839552   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:33.309028   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:33.360767   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:33.809095   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:34.308105   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:34.808624   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:35.308503   17406 kapi.go:107] duration metric: took 1m24.004430297s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 22:49:35.311229   17406 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, storage-provisioner-rancher, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1001 22:49:35.313435   17406 addons.go:510] duration metric: took 1m30.591052269s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner storage-provisioner-rancher ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1001 22:49:35.860038   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:38.360285   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:40.360731   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:42.860662   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:44.861364   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:47.360960   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:49.361415   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:51.860345   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:53.860554   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:55.861005   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:58.360325   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:00.360982   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:02.860466   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:05.361169   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:07.860795   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:10.360845   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:12.361030   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:14.361251   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:16.861153   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:17.360542   17406 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"True"
	I1001 22:50:17.360567   17406 pod_ready.go:82] duration metric: took 1m28.505461068s for pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace to be "Ready" ...
	I1001 22:50:17.360580   17406 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lqq7d" in "kube-system" namespace to be "Ready" ...
	I1001 22:50:17.364938   17406 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lqq7d" in "kube-system" namespace has status "Ready":"True"
	I1001 22:50:17.364959   17406 pod_ready.go:82] duration metric: took 4.371492ms for pod "nvidia-device-plugin-daemonset-lqq7d" in "kube-system" namespace to be "Ready" ...
	I1001 22:50:17.364976   17406 pod_ready.go:39] duration metric: took 1m30.510453662s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 22:50:17.364992   17406 api_server.go:52] waiting for apiserver process to appear ...
	I1001 22:50:17.365019   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 22:50:17.365069   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 22:50:17.398297   17406 cri.go:89] found id: "87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:17.398316   17406 cri.go:89] found id: ""
	I1001 22:50:17.398323   17406 logs.go:282] 1 containers: [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f]
	I1001 22:50:17.398363   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.401435   17406 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 22:50:17.401495   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 22:50:17.433832   17406 cri.go:89] found id: "e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:17.433858   17406 cri.go:89] found id: ""
	I1001 22:50:17.433868   17406 logs.go:282] 1 containers: [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f]
	I1001 22:50:17.433927   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.437182   17406 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 22:50:17.437254   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 22:50:17.470922   17406 cri.go:89] found id: "d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:17.470943   17406 cri.go:89] found id: ""
	I1001 22:50:17.470951   17406 logs.go:282] 1 containers: [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955]
	I1001 22:50:17.471003   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.474323   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 22:50:17.474391   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 22:50:17.507143   17406 cri.go:89] found id: "8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:17.507167   17406 cri.go:89] found id: ""
	I1001 22:50:17.507174   17406 logs.go:282] 1 containers: [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca]
	I1001 22:50:17.507247   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.510488   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 22:50:17.510547   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 22:50:17.543241   17406 cri.go:89] found id: "1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:17.543264   17406 cri.go:89] found id: ""
	I1001 22:50:17.543274   17406 logs.go:282] 1 containers: [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1]
	I1001 22:50:17.543341   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.546521   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 22:50:17.546585   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 22:50:17.579896   17406 cri.go:89] found id: "7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:17.579916   17406 cri.go:89] found id: ""
	I1001 22:50:17.579925   17406 logs.go:282] 1 containers: [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101]
	I1001 22:50:17.579965   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.583212   17406 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 22:50:17.583278   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 22:50:17.616058   17406 cri.go:89] found id: "6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:17.616086   17406 cri.go:89] found id: ""
	I1001 22:50:17.616096   17406 logs.go:282] 1 containers: [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5]
	I1001 22:50:17.616147   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.619409   17406 logs.go:123] Gathering logs for container status ...
	I1001 22:50:17.619436   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 22:50:17.660545   17406 logs.go:123] Gathering logs for kubelet ...
	I1001 22:50:17.660572   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 22:50:17.746011   17406 logs.go:123] Gathering logs for describe nodes ...
	I1001 22:50:17.746044   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 22:50:17.842009   17406 logs.go:123] Gathering logs for kube-apiserver [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f] ...
	I1001 22:50:17.842040   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:17.885176   17406 logs.go:123] Gathering logs for coredns [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955] ...
	I1001 22:50:17.885206   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:17.919459   17406 logs.go:123] Gathering logs for kindnet [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5] ...
	I1001 22:50:17.919491   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:17.953281   17406 logs.go:123] Gathering logs for CRI-O ...
	I1001 22:50:17.953312   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 22:50:18.025721   17406 logs.go:123] Gathering logs for dmesg ...
	I1001 22:50:18.025755   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 22:50:18.038322   17406 logs.go:123] Gathering logs for etcd [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f] ...
	I1001 22:50:18.038357   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:18.088439   17406 logs.go:123] Gathering logs for kube-scheduler [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca] ...
	I1001 22:50:18.088475   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:18.127405   17406 logs.go:123] Gathering logs for kube-proxy [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1] ...
	I1001 22:50:18.127444   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:18.161259   17406 logs.go:123] Gathering logs for kube-controller-manager [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101] ...
	I1001 22:50:18.161286   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:20.716346   17406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 22:50:20.729648   17406 api_server.go:72] duration metric: took 2m16.007309449s to wait for apiserver process to appear ...
	I1001 22:50:20.729676   17406 api_server.go:88] waiting for apiserver healthz status ...
	I1001 22:50:20.729734   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 22:50:20.729781   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 22:50:20.761825   17406 cri.go:89] found id: "87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:20.761846   17406 cri.go:89] found id: ""
	I1001 22:50:20.761854   17406 logs.go:282] 1 containers: [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f]
	I1001 22:50:20.761897   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.765056   17406 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 22:50:20.765117   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 22:50:20.798095   17406 cri.go:89] found id: "e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:20.798121   17406 cri.go:89] found id: ""
	I1001 22:50:20.798131   17406 logs.go:282] 1 containers: [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f]
	I1001 22:50:20.798175   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.801381   17406 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 22:50:20.801441   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 22:50:20.833580   17406 cri.go:89] found id: "d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:20.833602   17406 cri.go:89] found id: ""
	I1001 22:50:20.833611   17406 logs.go:282] 1 containers: [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955]
	I1001 22:50:20.833659   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.836924   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 22:50:20.836978   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 22:50:20.870184   17406 cri.go:89] found id: "8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:20.870207   17406 cri.go:89] found id: ""
	I1001 22:50:20.870218   17406 logs.go:282] 1 containers: [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca]
	I1001 22:50:20.870265   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.873386   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 22:50:20.873448   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 22:50:20.906120   17406 cri.go:89] found id: "1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:20.906145   17406 cri.go:89] found id: ""
	I1001 22:50:20.906153   17406 logs.go:282] 1 containers: [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1]
	I1001 22:50:20.906210   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.909798   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 22:50:20.909856   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 22:50:20.942582   17406 cri.go:89] found id: "7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:20.942607   17406 cri.go:89] found id: ""
	I1001 22:50:20.942616   17406 logs.go:282] 1 containers: [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101]
	I1001 22:50:20.942662   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.945891   17406 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 22:50:20.945948   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 22:50:20.979406   17406 cri.go:89] found id: "6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:20.979425   17406 cri.go:89] found id: ""
	I1001 22:50:20.979440   17406 logs.go:282] 1 containers: [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5]
	I1001 22:50:20.979482   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.983203   17406 logs.go:123] Gathering logs for dmesg ...
	I1001 22:50:20.983232   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 22:50:20.995293   17406 logs.go:123] Gathering logs for kube-apiserver [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f] ...
	I1001 22:50:20.995318   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:21.038301   17406 logs.go:123] Gathering logs for coredns [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955] ...
	I1001 22:50:21.038336   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:21.074335   17406 logs.go:123] Gathering logs for kube-scheduler [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca] ...
	I1001 22:50:21.074365   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:21.113455   17406 logs.go:123] Gathering logs for kube-proxy [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1] ...
	I1001 22:50:21.113484   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:21.145704   17406 logs.go:123] Gathering logs for CRI-O ...
	I1001 22:50:21.145730   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 22:50:21.217978   17406 logs.go:123] Gathering logs for kubelet ...
	I1001 22:50:21.218016   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 22:50:21.299409   17406 logs.go:123] Gathering logs for describe nodes ...
	I1001 22:50:21.299444   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 22:50:21.395877   17406 logs.go:123] Gathering logs for etcd [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f] ...
	I1001 22:50:21.395909   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:21.444896   17406 logs.go:123] Gathering logs for kube-controller-manager [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101] ...
	I1001 22:50:21.444933   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:21.500566   17406 logs.go:123] Gathering logs for kindnet [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5] ...
	I1001 22:50:21.500599   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:21.535184   17406 logs.go:123] Gathering logs for container status ...
	I1001 22:50:21.535223   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 22:50:24.075896   17406 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1001 22:50:24.080316   17406 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1001 22:50:24.081179   17406 api_server.go:141] control plane version: v1.31.1
	I1001 22:50:24.081202   17406 api_server.go:131] duration metric: took 3.351518463s to wait for apiserver health ...
	I1001 22:50:24.081210   17406 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 22:50:24.081253   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 22:50:24.081298   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 22:50:24.115343   17406 cri.go:89] found id: "87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:24.115365   17406 cri.go:89] found id: ""
	I1001 22:50:24.115373   17406 logs.go:282] 1 containers: [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f]
	I1001 22:50:24.115415   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.118584   17406 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 22:50:24.118649   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 22:50:24.151635   17406 cri.go:89] found id: "e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:24.151658   17406 cri.go:89] found id: ""
	I1001 22:50:24.151666   17406 logs.go:282] 1 containers: [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f]
	I1001 22:50:24.151707   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.154924   17406 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 22:50:24.154990   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 22:50:24.187218   17406 cri.go:89] found id: "d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:24.187244   17406 cri.go:89] found id: ""
	I1001 22:50:24.187252   17406 logs.go:282] 1 containers: [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955]
	I1001 22:50:24.187293   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.190608   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 22:50:24.190666   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 22:50:24.222899   17406 cri.go:89] found id: "8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:24.222920   17406 cri.go:89] found id: ""
	I1001 22:50:24.222930   17406 logs.go:282] 1 containers: [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca]
	I1001 22:50:24.222983   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.226303   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 22:50:24.226358   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 22:50:24.259453   17406 cri.go:89] found id: "1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:24.259475   17406 cri.go:89] found id: ""
	I1001 22:50:24.259483   17406 logs.go:282] 1 containers: [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1]
	I1001 22:50:24.259573   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.262913   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 22:50:24.262976   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 22:50:24.297869   17406 cri.go:89] found id: "7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:24.297896   17406 cri.go:89] found id: ""
	I1001 22:50:24.297904   17406 logs.go:282] 1 containers: [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101]
	I1001 22:50:24.297945   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.301077   17406 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 22:50:24.301142   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 22:50:24.333856   17406 cri.go:89] found id: "6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:24.333880   17406 cri.go:89] found id: ""
	I1001 22:50:24.333887   17406 logs.go:282] 1 containers: [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5]
	I1001 22:50:24.333940   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.337244   17406 logs.go:123] Gathering logs for kube-apiserver [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f] ...
	I1001 22:50:24.337267   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:24.379044   17406 logs.go:123] Gathering logs for etcd [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f] ...
	I1001 22:50:24.379076   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:24.425711   17406 logs.go:123] Gathering logs for coredns [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955] ...
	I1001 22:50:24.425743   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:24.461124   17406 logs.go:123] Gathering logs for kube-proxy [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1] ...
	I1001 22:50:24.461153   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:24.494069   17406 logs.go:123] Gathering logs for kindnet [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5] ...
	I1001 22:50:24.494106   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:24.528045   17406 logs.go:123] Gathering logs for container status ...
	I1001 22:50:24.528072   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 22:50:24.568370   17406 logs.go:123] Gathering logs for kubelet ...
	I1001 22:50:24.568400   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 22:50:24.646437   17406 logs.go:123] Gathering logs for dmesg ...
	I1001 22:50:24.646466   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 22:50:24.658903   17406 logs.go:123] Gathering logs for describe nodes ...
	I1001 22:50:24.658929   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 22:50:24.754724   17406 logs.go:123] Gathering logs for kube-scheduler [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca] ...
	I1001 22:50:24.754763   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:24.794371   17406 logs.go:123] Gathering logs for kube-controller-manager [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101] ...
	I1001 22:50:24.794403   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:24.852996   17406 logs.go:123] Gathering logs for CRI-O ...
	I1001 22:50:24.853034   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 22:50:27.432706   17406 system_pods.go:59] 18 kube-system pods found
	I1001 22:50:27.432747   17406 system_pods.go:61] "coredns-7c65d6cfc9-6cj4k" [bf0ce726-12dc-4a3b-bc7f-32b08162b072] Running
	I1001 22:50:27.432755   17406 system_pods.go:61] "csi-hostpath-attacher-0" [4d8a45e2-a65d-4d14-a0a5-61b0459194c8] Running
	I1001 22:50:27.432760   17406 system_pods.go:61] "csi-hostpath-resizer-0" [22ade8db-a4ea-45b8-99b2-fe431c97ecbb] Running
	I1001 22:50:27.432763   17406 system_pods.go:61] "csi-hostpathplugin-9hpwk" [9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c] Running
	I1001 22:50:27.432766   17406 system_pods.go:61] "etcd-addons-003557" [7b441ad9-1688-4020-b102-e367f37ff777] Running
	I1001 22:50:27.432770   17406 system_pods.go:61] "kindnet-8kp67" [c4f42c69-ca3e-4f26-acba-51f300a26d2e] Running
	I1001 22:50:27.432773   17406 system_pods.go:61] "kube-apiserver-addons-003557" [b68e3152-1b85-418a-acc5-62457bc07f17] Running
	I1001 22:50:27.432776   17406 system_pods.go:61] "kube-controller-manager-addons-003557" [fc7f4c52-6e3f-4f19-a2e0-965c373e53d9] Running
	I1001 22:50:27.432780   17406 system_pods.go:61] "kube-ingress-dns-minikube" [c5f5b225-07e7-4b1f-ad82-4e969170fdf5] Running
	I1001 22:50:27.432784   17406 system_pods.go:61] "kube-proxy-69j2j" [fb59b533-053f-480b-85dd-f485ad873034] Running
	I1001 22:50:27.432789   17406 system_pods.go:61] "kube-scheduler-addons-003557" [84ef7897-c5a4-4d4f-a0b7-3945ae61ea50] Running
	I1001 22:50:27.432792   17406 system_pods.go:61] "metrics-server-84c5f94fbc-zjg7c" [f8da0c14-1d24-402d-bcbd-d93fe9f23cc3] Running
	I1001 22:50:27.432796   17406 system_pods.go:61] "nvidia-device-plugin-daemonset-lqq7d" [96398da4-ed1b-465f-b551-4e9610a5a0b8] Running
	I1001 22:50:27.432799   17406 system_pods.go:61] "registry-66c9cd494c-nfhms" [6ea2ddd1-36cb-436c-8115-e19051d864b9] Running
	I1001 22:50:27.432802   17406 system_pods.go:61] "registry-proxy-b56zl" [927b1333-9d83-4da6-a33d-da374985f3f3] Running
	I1001 22:50:27.432806   17406 system_pods.go:61] "snapshot-controller-56fcc65765-r564p" [d04ff238-8840-4705-b74a-704495659229] Running
	I1001 22:50:27.432812   17406 system_pods.go:61] "snapshot-controller-56fcc65765-x5lsc" [f059396a-b1f2-4395-91fa-a812a3df93ca] Running
	I1001 22:50:27.432815   17406 system_pods.go:61] "storage-provisioner" [20cd141c-f893-40e3-ab7d-39590c85f67d] Running
	I1001 22:50:27.432820   17406 system_pods.go:74] duration metric: took 3.35158191s to wait for pod list to return data ...
	I1001 22:50:27.432830   17406 default_sa.go:34] waiting for default service account to be created ...
	I1001 22:50:27.435490   17406 default_sa.go:45] found service account: "default"
	I1001 22:50:27.435513   17406 default_sa.go:55] duration metric: took 2.67801ms for default service account to be created ...
	I1001 22:50:27.435526   17406 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 22:50:27.444043   17406 system_pods.go:86] 18 kube-system pods found
	I1001 22:50:27.444077   17406 system_pods.go:89] "coredns-7c65d6cfc9-6cj4k" [bf0ce726-12dc-4a3b-bc7f-32b08162b072] Running
	I1001 22:50:27.444083   17406 system_pods.go:89] "csi-hostpath-attacher-0" [4d8a45e2-a65d-4d14-a0a5-61b0459194c8] Running
	I1001 22:50:27.444087   17406 system_pods.go:89] "csi-hostpath-resizer-0" [22ade8db-a4ea-45b8-99b2-fe431c97ecbb] Running
	I1001 22:50:27.444093   17406 system_pods.go:89] "csi-hostpathplugin-9hpwk" [9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c] Running
	I1001 22:50:27.444097   17406 system_pods.go:89] "etcd-addons-003557" [7b441ad9-1688-4020-b102-e367f37ff777] Running
	I1001 22:50:27.444100   17406 system_pods.go:89] "kindnet-8kp67" [c4f42c69-ca3e-4f26-acba-51f300a26d2e] Running
	I1001 22:50:27.444105   17406 system_pods.go:89] "kube-apiserver-addons-003557" [b68e3152-1b85-418a-acc5-62457bc07f17] Running
	I1001 22:50:27.444109   17406 system_pods.go:89] "kube-controller-manager-addons-003557" [fc7f4c52-6e3f-4f19-a2e0-965c373e53d9] Running
	I1001 22:50:27.444113   17406 system_pods.go:89] "kube-ingress-dns-minikube" [c5f5b225-07e7-4b1f-ad82-4e969170fdf5] Running
	I1001 22:50:27.444116   17406 system_pods.go:89] "kube-proxy-69j2j" [fb59b533-053f-480b-85dd-f485ad873034] Running
	I1001 22:50:27.444120   17406 system_pods.go:89] "kube-scheduler-addons-003557" [84ef7897-c5a4-4d4f-a0b7-3945ae61ea50] Running
	I1001 22:50:27.444124   17406 system_pods.go:89] "metrics-server-84c5f94fbc-zjg7c" [f8da0c14-1d24-402d-bcbd-d93fe9f23cc3] Running
	I1001 22:50:27.444127   17406 system_pods.go:89] "nvidia-device-plugin-daemonset-lqq7d" [96398da4-ed1b-465f-b551-4e9610a5a0b8] Running
	I1001 22:50:27.444131   17406 system_pods.go:89] "registry-66c9cd494c-nfhms" [6ea2ddd1-36cb-436c-8115-e19051d864b9] Running
	I1001 22:50:27.444135   17406 system_pods.go:89] "registry-proxy-b56zl" [927b1333-9d83-4da6-a33d-da374985f3f3] Running
	I1001 22:50:27.444138   17406 system_pods.go:89] "snapshot-controller-56fcc65765-r564p" [d04ff238-8840-4705-b74a-704495659229] Running
	I1001 22:50:27.444143   17406 system_pods.go:89] "snapshot-controller-56fcc65765-x5lsc" [f059396a-b1f2-4395-91fa-a812a3df93ca] Running
	I1001 22:50:27.444146   17406 system_pods.go:89] "storage-provisioner" [20cd141c-f893-40e3-ab7d-39590c85f67d] Running
	I1001 22:50:27.444152   17406 system_pods.go:126] duration metric: took 8.621679ms to wait for k8s-apps to be running ...
	I1001 22:50:27.444161   17406 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 22:50:27.444210   17406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 22:50:27.455667   17406 system_svc.go:56] duration metric: took 11.494257ms WaitForService to wait for kubelet
	I1001 22:50:27.455700   17406 kubeadm.go:582] duration metric: took 2m22.733365482s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 22:50:27.455724   17406 node_conditions.go:102] verifying NodePressure condition ...
	I1001 22:50:27.458744   17406 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1001 22:50:27.458792   17406 node_conditions.go:123] node cpu capacity is 8
	I1001 22:50:27.458812   17406 node_conditions.go:105] duration metric: took 3.081332ms to run NodePressure ...
	I1001 22:50:27.458828   17406 start.go:241] waiting for startup goroutines ...
	I1001 22:50:27.458837   17406 start.go:246] waiting for cluster config update ...
	I1001 22:50:27.458859   17406 start.go:255] writing updated cluster config ...
	I1001 22:50:27.459165   17406 ssh_runner.go:195] Run: rm -f paused
	I1001 22:50:27.508443   17406 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 22:50:27.510465   17406 out.go:177] * Done! kubectl is now configured to use "addons-003557" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 23:01:01 addons-003557 crio[1033]: time="2024-10-01 23:01:01.642143332Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=27334abc-422f-4f1b-9e8f-19317c650f5f name=/runtime.v1.ImageService/PullImage
	Oct 01 23:01:01 addons-003557 crio[1033]: time="2024-10-01 23:01:01.643501082Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 01 23:01:01 addons-003557 crio[1033]: time="2024-10-01 23:01:01.968164229Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.669688902Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e" id=27334abc-422f-4f1b-9e8f-19317c650f5f name=/runtime.v1.ImageService/PullImage
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.670214216Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=11500386-bead-4510-b5ef-5e06257cbf40 name=/runtime.v1.ImageService/ImageStatus
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.670816348Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=11500386-bead-4510-b5ef-5e06257cbf40 name=/runtime.v1.ImageService/ImageStatus
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.671543169Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1a13c2f5-0194-473f-a700-e4a39c57ff9a name=/runtime.v1.ImageService/ImageStatus
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.672143572Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1a13c2f5-0194-473f-a700-e4a39c57ff9a name=/runtime.v1.ImageService/ImageStatus
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.672907224Z" level=info msg="Creating container: default/busybox/busybox" id=fef0afe9-a3bf-42aa-91b7-02eeb022ad9d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.672991743Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.725226990Z" level=info msg="Created container 498059f0354298537857b7c75a86ae157891f6f5fc2f4b817f6822c0e1afe7c5: default/busybox/busybox" id=fef0afe9-a3bf-42aa-91b7-02eeb022ad9d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.725808825Z" level=info msg="Starting container: 498059f0354298537857b7c75a86ae157891f6f5fc2f4b817f6822c0e1afe7c5" id=f2138976-84da-41d1-9f15-50a5014f5ac2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 01 23:01:02 addons-003557 crio[1033]: time="2024-10-01 23:01:02.731577496Z" level=info msg="Started container" PID=17500 containerID=498059f0354298537857b7c75a86ae157891f6f5fc2f4b817f6822c0e1afe7c5 description=default/busybox/busybox id=f2138976-84da-41d1-9f15-50a5014f5ac2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae208a41461872dbe2c2a9c30f9751969aeb6ed8846bf1d025695fe8f7201faf
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.266297136Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-m668q/POD" id=c4122266-3602-4a53-ad24-61c03520925f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.266349940Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.288563046Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-m668q Namespace:default ID:0202013e728543d864a5639766c6c88d24b3a7a1c31288462279ef8a027a7ade UID:7eec5510-dee3-446b-a3c2-bd081b192280 NetNS:/var/run/netns/b9559f6c-bf03-43df-815e-ea52f75030c2 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.288598622Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-m668q to CNI network \"kindnet\" (type=ptp)"
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.334723519Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-m668q Namespace:default ID:0202013e728543d864a5639766c6c88d24b3a7a1c31288462279ef8a027a7ade UID:7eec5510-dee3-446b-a3c2-bd081b192280 NetNS:/var/run/netns/b9559f6c-bf03-43df-815e-ea52f75030c2 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.334904521Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-m668q for CNI network kindnet (type=ptp)"
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.337294385Z" level=info msg="Ran pod sandbox 0202013e728543d864a5639766c6c88d24b3a7a1c31288462279ef8a027a7ade with infra container: default/hello-world-app-55bf9c44b4-m668q/POD" id=c4122266-3602-4a53-ad24-61c03520925f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.338419610Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d0c36e6c-8c70-4b30-8c8a-23ab9f56af9d name=/runtime.v1.ImageService/ImageStatus
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.338675995Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d0c36e6c-8c70-4b30-8c8a-23ab9f56af9d name=/runtime.v1.ImageService/ImageStatus
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.339156867Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=555c7b7c-bbe1-4c26-89b6-4b981257e73b name=/runtime.v1.ImageService/PullImage
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.355975923Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 01 23:01:41 addons-003557 crio[1033]: time="2024-10-01 23:01:41.843786920Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	498059f035429       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          39 seconds ago      Running             busybox                   0                   ae208a4146187       busybox
	2bd3c57ae8b07       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   73e2c96114943       nginx
	2948a71e700fb       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             12 minutes ago      Running             controller                0                   1a9ab1c459822       ingress-nginx-controller-bc57996ff-bmzvg
	3b788fdf0d4ff       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             12 minutes ago      Exited              patch                     2                   727bf92a44730       ingress-nginx-admission-patch-9cw6h
	6430ee5d1d00b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   df31a94b355c1       ingress-nginx-admission-create-xvl5n
	1f273bc376231       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   89e385fcf8343       metrics-server-84c5f94fbc-zjg7c
	85441baad1e83       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             12 minutes ago      Running             minikube-ingress-dns      0                   e51071ef17b5a       kube-ingress-dns-minikube
	f56a3f3f14126       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   2970730c13a9c       storage-provisioner
	d0bb0bd096899       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   71e46a2f64bf1       coredns-7c65d6cfc9-6cj4k
	6c8c4e3c950ae       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                             13 minutes ago      Running             kindnet-cni               0                   c9b5341189508       kindnet-8kp67
	1dd0b703f1047       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   a3b24715e9571       kube-proxy-69j2j
	87752c9368125       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   e698fbb913e3d       kube-apiserver-addons-003557
	e2475f6c19b3e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   d39ade4a2c9ba       etcd-addons-003557
	8cfd176ea2dd2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   721511daf2b97       kube-scheduler-addons-003557
	7c7968828a881       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   4cc08c42bd513       kube-controller-manager-addons-003557
	
	
	==> coredns [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955] <==
	[INFO] 10.244.0.17:42163 - 24658 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000037012s
	[INFO] 10.244.0.17:49887 - 52595 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005027765s
	[INFO] 10.244.0.17:49887 - 52840 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005270421s
	[INFO] 10.244.0.17:39353 - 40702 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005922891s
	[INFO] 10.244.0.17:39353 - 41038 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006339565s
	[INFO] 10.244.0.17:56159 - 8350 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006122576s
	[INFO] 10.244.0.17:56159 - 8049 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006181843s
	[INFO] 10.244.0.17:42188 - 63793 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000081522s
	[INFO] 10.244.0.17:42188 - 63546 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119576s
	[INFO] 10.244.0.20:60178 - 38112 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000190932s
	[INFO] 10.244.0.20:44060 - 45157 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000240167s
	[INFO] 10.244.0.20:37280 - 38436 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156709s
	[INFO] 10.244.0.20:45800 - 21952 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167984s
	[INFO] 10.244.0.20:42087 - 56479 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123659s
	[INFO] 10.244.0.20:46640 - 6497 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000159705s
	[INFO] 10.244.0.20:56227 - 13377 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005494977s
	[INFO] 10.244.0.20:44664 - 46658 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005950547s
	[INFO] 10.244.0.20:54249 - 7687 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006744797s
	[INFO] 10.244.0.20:56222 - 13733 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.009048405s
	[INFO] 10.244.0.20:44534 - 44603 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006895595s
	[INFO] 10.244.0.20:33235 - 32712 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007239228s
	[INFO] 10.244.0.20:58455 - 42586 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000975592s
	[INFO] 10.244.0.20:47517 - 24313 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001055557s
	[INFO] 10.244.0.23:33292 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000190952s
	[INFO] 10.244.0.23:60867 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000132294s
	
	
	==> describe nodes <==
	Name:               addons-003557
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-003557
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=addons-003557
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T22_47_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-003557
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 22:47:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-003557
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:01:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:01:36 +0000   Tue, 01 Oct 2024 22:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:01:36 +0000   Tue, 01 Oct 2024 22:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:01:36 +0000   Tue, 01 Oct 2024 22:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:01:36 +0000   Tue, 01 Oct 2024 22:48:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-003557
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 9ed29642d6d04c75b61a758c77e1d89f
	  System UUID:                f2297c5c-adbc-484a-bd16-a1531a553d6e
	  Boot ID:                    47cfe39a-81d3-44ee-8311-5ab31cab672f
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-m668q            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-bmzvg    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-6cj4k                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-addons-003557                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-8kp67                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-addons-003557                250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-003557       200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-69j2j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-003557                100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-zjg7c             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-003557 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-003557 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-003557 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node addons-003557 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node addons-003557 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node addons-003557 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node addons-003557 event: Registered Node addons-003557 in Controller
	  Normal   NodeReady                12m                kubelet          Node addons-003557 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000749] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000737] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000717] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000641] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000640] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000640] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000724] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.651075] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.054572] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.030598] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.215174] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 1 22:59] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[  +1.003835] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[  +2.015821] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[  +4.255579] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[  +8.191302] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[Oct 1 23:00] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[ +32.513132] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	
	
	==> etcd [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f] <==
	{"level":"info","ts":"2024-10-01T22:48:08.041836Z","caller":"traceutil/trace.go:171","msg":"trace[2093571794] range","detail":"{range_begin:/registry/minions/addons-003557; range_end:; response_count:1; response_revision:422; }","duration":"101.727606ms","start":"2024-10-01T22:48:07.940099Z","end":"2024-10-01T22:48:08.041827Z","steps":["trace[2093571794] 'agreement among raft nodes before linearized reading'  (duration: 101.621649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.041992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.622219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:48:08.042058Z","caller":"traceutil/trace.go:171","msg":"trace[797429483] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:422; }","duration":"104.693675ms","start":"2024-10-01T22:48:07.937356Z","end":"2024-10-01T22:48:08.042049Z","steps":["trace[797429483] 'agreement among raft nodes before linearized reading'  (duration: 104.60761ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.042208Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.2984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-addons-003557\" ","response":"range_response_count:1 size:7632"}
	{"level":"info","ts":"2024-10-01T22:48:08.042287Z","caller":"traceutil/trace.go:171","msg":"trace[1949431822] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-addons-003557; range_end:; response_count:1; response_revision:422; }","duration":"105.377422ms","start":"2024-10-01T22:48:07.936901Z","end":"2024-10-01T22:48:08.042278Z","steps":["trace[1949431822] 'agreement among raft nodes before linearized reading'  (duration: 105.274394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.042440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.041417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:48:08.042493Z","caller":"traceutil/trace.go:171","msg":"trace[619821307] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:422; }","duration":"309.096108ms","start":"2024-10-01T22:48:07.733390Z","end":"2024-10-01T22:48:08.042486Z","steps":["trace[619821307] 'agreement among raft nodes before linearized reading'  (duration: 309.027271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.042544Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:48:07.733351Z","time spent":"309.18501ms","remote":"127.0.0.1:50540","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/kube-system/registry\" "}
	{"level":"warn","ts":"2024-10-01T22:48:08.042690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"404.064854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-vt4wh\" ","response":"range_response_count:1 size:3993"}
	{"level":"info","ts":"2024-10-01T22:48:08.042743Z","caller":"traceutil/trace.go:171","msg":"trace[1229854174] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-vt4wh; range_end:; response_count:1; response_revision:422; }","duration":"404.117853ms","start":"2024-10-01T22:48:07.638618Z","end":"2024-10-01T22:48:08.042736Z","steps":["trace[1229854174] 'agreement among raft nodes before linearized reading'  (duration: 404.04266ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.042790Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:48:07.638603Z","time spent":"404.1809ms","remote":"127.0.0.1:50280","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4017,"request content":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-vt4wh\" "}
	{"level":"warn","ts":"2024-10-01T22:48:08.042967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"404.38146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3144"}
	{"level":"info","ts":"2024-10-01T22:48:08.043022Z","caller":"traceutil/trace.go:171","msg":"trace[1034155169] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:422; }","duration":"404.435476ms","start":"2024-10-01T22:48:07.638579Z","end":"2024-10-01T22:48:08.043014Z","steps":["trace[1034155169] 'agreement among raft nodes before linearized reading'  (duration: 404.359529ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.043079Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:48:07.638539Z","time spent":"404.533248ms","remote":"127.0.0.1:50540","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":1,"response size":3168,"request content":"key:\"/registry/deployments/default/cloud-spanner-emulator\" "}
	{"level":"warn","ts":"2024-10-01T22:48:08.043214Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"407.535296ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:48:08.043279Z","caller":"traceutil/trace.go:171","msg":"trace[690083644] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:422; }","duration":"407.600156ms","start":"2024-10-01T22:48:07.635671Z","end":"2024-10-01T22:48:08.043271Z","steps":["trace[690083644] 'agreement among raft nodes before linearized reading'  (duration: 407.523911ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:48:08.145870Z","caller":"traceutil/trace.go:171","msg":"trace[1573253521] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"101.911502ms","start":"2024-10-01T22:48:08.043940Z","end":"2024-10-01T22:48:08.145851Z","steps":["trace[1573253521] 'process raft request'  (duration: 96.492529ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.146592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.506136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2024-10-01T22:48:08.148168Z","caller":"traceutil/trace.go:171","msg":"trace[313351885] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:427; }","duration":"104.085949ms","start":"2024-10-01T22:48:08.044066Z","end":"2024-10-01T22:48:08.148152Z","steps":["trace[313351885] 'agreement among raft nodes before linearized reading'  (duration: 102.489858ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:49:38.865298Z","caller":"traceutil/trace.go:171","msg":"trace[1592076025] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"110.479447ms","start":"2024-10-01T22:49:38.754796Z","end":"2024-10-01T22:49:38.865275Z","steps":["trace[1592076025] 'process raft request'  (duration: 110.298004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:49:50.165142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.700411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-01T22:49:50.165212Z","caller":"traceutil/trace.go:171","msg":"trace[1736816869] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1219; }","duration":"108.782713ms","start":"2024-10-01T22:49:50.056414Z","end":"2024-10-01T22:49:50.165196Z","steps":["trace[1736816869] 'range keys from in-memory index tree'  (duration: 108.568939ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:57:55.062972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1534}
	{"level":"info","ts":"2024-10-01T22:57:55.084720Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1534,"took":"21.290473ms","hash":3361936497,"current-db-size-bytes":6246400,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3198976,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-10-01T22:57:55.084768Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3361936497,"revision":1534,"compact-revision":-1}
	
	
	==> kernel <==
	 23:01:42 up 44 min,  0 users,  load average: 0.11, 0.28, 0.29
	Linux addons-003557 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5] <==
	I1001 22:59:36.239667       1 main.go:299] handling current node
	I1001 22:59:46.240704       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 22:59:46.240735       1 main.go:299] handling current node
	I1001 22:59:56.239867       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 22:59:56.239924       1 main.go:299] handling current node
	I1001 23:00:06.239368       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:00:06.239403       1 main.go:299] handling current node
	I1001 23:00:16.242006       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:00:16.242048       1 main.go:299] handling current node
	I1001 23:00:26.240088       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:00:26.240142       1 main.go:299] handling current node
	I1001 23:00:36.239675       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:00:36.239717       1 main.go:299] handling current node
	I1001 23:00:46.241099       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:00:46.241138       1 main.go:299] handling current node
	I1001 23:00:56.240122       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:00:56.240157       1 main.go:299] handling current node
	I1001 23:01:06.240042       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:01:06.240070       1 main.go:299] handling current node
	I1001 23:01:16.239131       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:01:16.239160       1 main.go:299] handling current node
	I1001 23:01:26.239163       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:01:26.239205       1 main.go:299] handling current node
	I1001 23:01:36.239291       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:01:36.239350       1 main.go:299] handling current node
	
	
	==> kube-apiserver [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f] <==
	I1001 22:50:17.280293       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1001 22:58:38.852087       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1001 22:58:39.553141       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.10.162"}
	I1001 22:59:09.222562       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1001 22:59:12.619387       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1001 22:59:12.625270       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1001 22:59:12.631150       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1001 22:59:17.269202       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1001 22:59:18.290368       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1001 22:59:22.735439       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1001 22:59:22.942054       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.45.72"}
	E1001 22:59:27.631536       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1001 22:59:28.301512       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.301560       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:59:28.314793       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.314834       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:59:28.315644       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.355369       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.355420       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:59:28.441304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.441343       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1001 22:59:29.332922       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1001 22:59:29.441654       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1001 22:59:29.549574       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1001 23:01:41.154041       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.51.148"}
	
	
	==> kube-controller-manager [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101] <==
	W1001 23:00:04.748027       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:04.748067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:10.229997       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:10.230039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:24.849708       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:24.849750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:30.340038       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:30.340087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:41.673519       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:41.673565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:49.144027       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:49.144073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:01:04.631393       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:04.631438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:01:05.137570       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:05.137621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:01:23.585469       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:23.585515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:01:34.155383       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:34.155428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1001 23:01:36.156132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-003557"
	I1001 23:01:40.965887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="16.64561ms"
	I1001 23:01:40.971704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.766792ms"
	I1001 23:01:40.971772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.816µs"
	I1001 23:01:40.973473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="43.111µs"
	
	
	==> kube-proxy [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1] <==
	I1001 22:48:05.557957       1 server_linux.go:66] "Using iptables proxy"
	I1001 22:48:06.935938       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1001 22:48:06.936105       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 22:48:08.335516       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1001 22:48:08.335603       1 server_linux.go:169] "Using iptables Proxier"
	I1001 22:48:08.439322       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 22:48:08.442907       1 server.go:483] "Version info" version="v1.31.1"
	I1001 22:48:08.442943       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 22:48:08.444401       1 config.go:199] "Starting service config controller"
	I1001 22:48:08.445143       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 22:48:08.444927       1 config.go:328] "Starting node config controller"
	I1001 22:48:08.445294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 22:48:08.444426       1 config.go:105] "Starting endpoint slice config controller"
	I1001 22:48:08.445345       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 22:48:08.546666       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 22:48:08.549131       1 shared_informer.go:320] Caches are synced for service config
	I1001 22:48:08.549158       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca] <==
	E1001 22:47:56.636501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.636228       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 22:47:56.636533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.636224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 22:47:56.636538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1001 22:47:56.636567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.636282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 22:47:56.636953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.637015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 22:47:56.637054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.637119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 22:47:56.637156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.472310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 22:47:57.472345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.503661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 22:47:57.503697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.515984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 22:47:57.516025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.584924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 22:47:57.584969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.636284       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 22:47:57.636323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.643587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 22:47:57.643635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 22:47:57.959820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963854    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f059396a-b1f2-4395-91fa-a812a3df93ca" containerName="volume-snapshot-controller"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963878    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="liveness-probe"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963889    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="hostpath"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963898    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22ade8db-a4ea-45b8-99b2-fe431c97ecbb" containerName="csi-resizer"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963909    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="csi-provisioner"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963917    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="csi-snapshotter"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963927    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="902bbaeb-ddb2-4e5e-8ddb-cfd94b0bb030" containerName="local-path-provisioner"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963936    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="csi-external-health-monitor-controller"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963947    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d04ff238-8840-4705-b74a-704495659229" containerName="volume-snapshot-controller"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963957    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92efad93-7972-47ea-b7b0-be4f6240c386" containerName="task-pv-container"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963969    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d8a45e2-a65d-4d14-a0a5-61b0459194c8" containerName="csi-attacher"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: E1001 23:01:40.963978    1628 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="node-driver-registrar"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964016    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="csi-provisioner"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964026    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d8a45e2-a65d-4d14-a0a5-61b0459194c8" containerName="csi-attacher"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964035    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="f059396a-b1f2-4395-91fa-a812a3df93ca" containerName="volume-snapshot-controller"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964042    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ade8db-a4ea-45b8-99b2-fe431c97ecbb" containerName="csi-resizer"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964050    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="node-driver-registrar"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964057    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="92efad93-7972-47ea-b7b0-be4f6240c386" containerName="task-pv-container"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964064    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04ff238-8840-4705-b74a-704495659229" containerName="volume-snapshot-controller"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964072    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="liveness-probe"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964080    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="hostpath"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964088    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="csi-snapshotter"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964096    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="902bbaeb-ddb2-4e5e-8ddb-cfd94b0bb030" containerName="local-path-provisioner"
	Oct 01 23:01:40 addons-003557 kubelet[1628]: I1001 23:01:40.964104    1628 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c" containerName="csi-external-health-monitor-controller"
	Oct 01 23:01:41 addons-003557 kubelet[1628]: I1001 23:01:41.133135    1628 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8kf\" (UniqueName: \"kubernetes.io/projected/7eec5510-dee3-446b-a3c2-bd081b192280-kube-api-access-wp8kf\") pod \"hello-world-app-55bf9c44b4-m668q\" (UID: \"7eec5510-dee3-446b-a3c2-bd081b192280\") " pod="default/hello-world-app-55bf9c44b4-m668q"
	
	
	==> storage-provisioner [f56a3f3f14126cacab4cbd02cc73c99c7bcb1128d9431c0d38e1a34f2c686815] <==
	I1001 22:48:47.673165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 22:48:47.681511       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 22:48:47.681566       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 22:48:47.688349       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 22:48:47.688487       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-003557_0a79a27c-37bb-4dc1-86a9-f726cc75b962!
	I1001 22:48:47.688428       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bee9eb1d-e089-411a-bb70-9303bb4633b2", APIVersion:"v1", ResourceVersion:"912", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-003557_0a79a27c-37bb-4dc1-86a9-f726cc75b962 became leader
	I1001 22:48:47.789023       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-003557_0a79a27c-37bb-4dc1-86a9-f726cc75b962!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-003557 -n addons-003557
helpers_test.go:261: (dbg) Run:  kubectl --context addons-003557 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-m668q ingress-nginx-admission-create-xvl5n ingress-nginx-admission-patch-9cw6h
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-003557 describe pod hello-world-app-55bf9c44b4-m668q ingress-nginx-admission-create-xvl5n ingress-nginx-admission-patch-9cw6h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-003557 describe pod hello-world-app-55bf9c44b4-m668q ingress-nginx-admission-create-xvl5n ingress-nginx-admission-patch-9cw6h: exit status 1 (68.947802ms)

-- stdout --
	Name:             hello-world-app-55bf9c44b4-m668q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-003557/192.168.49.2
	Start Time:       Tue, 01 Oct 2024 23:01:40 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wp8kf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wp8kf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-m668q to addons-003557
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.631s (1.631s including waiting). Image size: 4944818 bytes.
	  Normal  Created    0s    kubelet            Created container hello-world-app
	  Normal  Started    0s    kubelet            Started container hello-world-app
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xvl5n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9cw6h" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-003557 describe pod hello-world-app-55bf9c44b4-m668q ingress-nginx-admission-create-xvl5n ingress-nginx-admission-patch-9cw6h: exit status 1
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-003557 addons disable ingress-dns --alsologtostderr -v=1: (1.501301286s)
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable ingress --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-003557 addons disable ingress --alsologtostderr -v=1: (7.611662832s)
--- FAIL: TestAddons/parallel/Ingress (150.01s)
TestAddons/parallel/MetricsServer (331.44s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.764193ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zjg7c" [f8da0c14-1d24-402d-bcbd-d93fe9f23cc3] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003361144s
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (65.976795ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 10m40.920414386s
** /stderr **
I1001 22:58:44.922430   16095 retry.go:31] will retry after 3.381715145s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (66.676148ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 10m44.369920757s
** /stderr **
I1001 22:58:48.371779   16095 retry.go:31] will retry after 3.195960906s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (62.638387ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 10m47.629542407s
** /stderr **
I1001 22:58:51.631355   16095 retry.go:31] will retry after 7.762794848s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (76.658008ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 10m55.469520653s
** /stderr **
I1001 22:58:59.471395   16095 retry.go:31] will retry after 7.16487416s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (64.757546ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 11m2.699388966s
** /stderr **
I1001 22:59:06.701302   16095 retry.go:31] will retry after 10.946036591s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (67.521492ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 11m13.714016875s
** /stderr **
I1001 22:59:17.716004   16095 retry.go:31] will retry after 27.214994305s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (64.134046ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 11m40.994771688s
** /stderr **
I1001 22:59:44.997038   16095 retry.go:31] will retry after 44.344598829s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (66.370852ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 12m25.406806668s
** /stderr **
I1001 23:00:29.409146   16095 retry.go:31] will retry after 1m9.519700111s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (62.367047ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 13m34.990101227s
** /stderr **
I1001 23:01:38.992366   16095 retry.go:31] will retry after 1m12.692739332s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (62.43657ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 14m47.74911544s
** /stderr **
I1001 23:02:51.751270   16095 retry.go:31] will retry after 1m16.036902027s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-003557 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-003557 top pods -n kube-system: exit status 1 (63.119492ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6cj4k, age: 16m3.849676825s
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-003557
helpers_test.go:235: (dbg) docker inspect addons-003557:
-- stdout --
	[
	    {
	        "Id": "e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff",
	        "Created": "2024-10-01T22:47:46.000812598Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18163,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-01T22:47:46.140159634Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1e9ad061035bd5b30872a757d87ebe8d5dc61829c56d176a3bb4ef156d71dbc8",
	        "ResolvConfPath": "/var/lib/docker/containers/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff/hostname",
	        "HostsPath": "/var/lib/docker/containers/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff/hosts",
	        "LogPath": "/var/lib/docker/containers/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff-json.log",
	        "Name": "/addons-003557",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-003557:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-003557",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fc1a80bd84cd48e1be9101afc74670c23d38996937a2b16194a3917e5b7da15c-init/diff:/var/lib/docker/overlay2/b9404fff46f8e735d2bf051ec5059d82dbc01f063c2a94263bbafaa62c37fadc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc1a80bd84cd48e1be9101afc74670c23d38996937a2b16194a3917e5b7da15c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc1a80bd84cd48e1be9101afc74670c23d38996937a2b16194a3917e5b7da15c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc1a80bd84cd48e1be9101afc74670c23d38996937a2b16194a3917e5b7da15c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-003557",
	                "Source": "/var/lib/docker/volumes/addons-003557/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-003557",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-003557",
	                "name.minikube.sigs.k8s.io": "addons-003557",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "307d7cdc38ea628e0dfcba85c98e607c34fa01f82b4bbeb52716621b4276720c",
	            "SandboxKey": "/var/run/docker/netns/307d7cdc38ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-003557": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b349c7b1c5228352689da8597b81b9e506a3bcef928ffcaf2f324cfe2c11add3",
	                    "EndpointID": "eb8ad19fb69207ef817348b3b7c0e210303b25e993f2a45501a50dcf7e4a2c23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-003557",
	                        "e707a4e961c1"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-003557 -n addons-003557
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-003557 logs -n 25: (1.114324051s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-848534 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | download-docker-848534                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-848534                                                                   | download-docker-848534 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-560533   | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | binary-mirror-560533                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36859                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-560533                                                                     | binary-mirror-560533   | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | addons-003557                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | addons-003557                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-003557 --wait=true                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:50 UTC | 01 Oct 24 22:50 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | -p addons-003557                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-003557 ip                                                                            | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:59 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | -p addons-003557                                                                            |                        |         |         |                     |                     |
	| addons  | addons-003557 addons                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-003557 ssh cat                                                                       | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | /opt/local-path-provisioner/pvc-0e7d7921-5349-40a6-8079-5946d984cc77_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-003557 addons                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-003557 addons                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-003557 addons                                                                        | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC | 01 Oct 24 22:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-003557 ssh curl -s                                                                   | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 22:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-003557 ip                                                                            | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 23:01 UTC | 01 Oct 24 23:01 UTC |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 23:01 UTC | 01 Oct 24 23:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-003557 addons disable                                                                | addons-003557          | jenkins | v1.34.0 | 01 Oct 24 23:01 UTC | 01 Oct 24 23:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 22:47:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 22:47:21.965658   17406 out.go:345] Setting OutFile to fd 1 ...
	I1001 22:47:21.965789   17406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:21.965798   17406 out.go:358] Setting ErrFile to fd 2...
	I1001 22:47:21.965804   17406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:21.965996   17406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 22:47:21.966616   17406 out.go:352] Setting JSON to false
	I1001 22:47:21.967442   17406 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1789,"bootTime":1727821053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 22:47:21.967541   17406 start.go:139] virtualization: kvm guest
	I1001 22:47:21.969853   17406 out.go:177] * [addons-003557] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 22:47:21.971350   17406 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 22:47:21.971352   17406 notify.go:220] Checking for updates...
	I1001 22:47:21.973883   17406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 22:47:21.975207   17406 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	I1001 22:47:21.976372   17406 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	I1001 22:47:21.977578   17406 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 22:47:21.978933   17406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 22:47:21.980553   17406 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 22:47:22.002383   17406 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 22:47:22.002490   17406 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 22:47:22.046510   17406 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 22:47:22.03757525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 22:47:22.046629   17406 docker.go:318] overlay module found
	I1001 22:47:22.049336   17406 out.go:177] * Using the docker driver based on user configuration
	I1001 22:47:22.050531   17406 start.go:297] selected driver: docker
	I1001 22:47:22.050551   17406 start.go:901] validating driver "docker" against <nil>
	I1001 22:47:22.050566   17406 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 22:47:22.051343   17406 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 22:47:22.096445   17406 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 22:47:22.086770495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 22:47:22.096619   17406 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 22:47:22.096879   17406 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 22:47:22.098952   17406 out.go:177] * Using Docker driver with root privileges
	I1001 22:47:22.100092   17406 cni.go:84] Creating CNI manager for ""
	I1001 22:47:22.100164   17406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 22:47:22.100180   17406 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 22:47:22.100269   17406 start.go:340] cluster config:
	{Name:addons-003557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:22.101480   17406 out.go:177] * Starting "addons-003557" primary control-plane node in "addons-003557" cluster
	I1001 22:47:22.102565   17406 cache.go:121] Beginning downloading kic base image for docker with crio
	I1001 22:47:22.103583   17406 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1001 22:47:22.104776   17406 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:22.104803   17406 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 22:47:22.104812   17406 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:22.104820   17406 cache.go:56] Caching tarball of preloaded images
	I1001 22:47:22.104891   17406 preload.go:172] Found /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 22:47:22.104902   17406 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 22:47:22.105207   17406 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/config.json ...
	I1001 22:47:22.105229   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/config.json: {Name:mk130bfc3a5e480d2dbe9dd1c51226ea03a7c34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:22.121824   17406 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 22:47:22.121946   17406 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 22:47:22.121965   17406 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1001 22:47:22.121972   17406 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1001 22:47:22.121984   17406 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 22:47:22.121992   17406 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1001 22:47:33.736785   17406 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1001 22:47:33.736832   17406 cache.go:194] Successfully downloaded all kic artifacts
	I1001 22:47:33.736876   17406 start.go:360] acquireMachinesLock for addons-003557: {Name:mkb213c143cb031a9d9505d7f03929c80936d14e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:33.736984   17406 start.go:364] duration metric: took 87.033µs to acquireMachinesLock for "addons-003557"
	I1001 22:47:33.737012   17406 start.go:93] Provisioning new machine with config: &{Name:addons-003557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 22:47:33.737120   17406 start.go:125] createHost starting for "" (driver="docker")
	I1001 22:47:33.739047   17406 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1001 22:47:33.739300   17406 start.go:159] libmachine.API.Create for "addons-003557" (driver="docker")
	I1001 22:47:33.739341   17406 client.go:168] LocalClient.Create starting
	I1001 22:47:33.739444   17406 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem
	I1001 22:47:33.893440   17406 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/cert.pem
	I1001 22:47:34.236814   17406 cli_runner.go:164] Run: docker network inspect addons-003557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1001 22:47:34.251923   17406 cli_runner.go:211] docker network inspect addons-003557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1001 22:47:34.252014   17406 network_create.go:284] running [docker network inspect addons-003557] to gather additional debugging logs...
	I1001 22:47:34.252041   17406 cli_runner.go:164] Run: docker network inspect addons-003557
	W1001 22:47:34.268031   17406 cli_runner.go:211] docker network inspect addons-003557 returned with exit code 1
	I1001 22:47:34.268060   17406 network_create.go:287] error running [docker network inspect addons-003557]: docker network inspect addons-003557: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-003557 not found
	I1001 22:47:34.268070   17406 network_create.go:289] output of [docker network inspect addons-003557]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-003557 not found
	
	** /stderr **
	I1001 22:47:34.268155   17406 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 22:47:34.283445   17406 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001970770}
	I1001 22:47:34.283488   17406 network_create.go:124] attempt to create docker network addons-003557 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1001 22:47:34.283530   17406 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-003557 addons-003557
	I1001 22:47:34.343380   17406 network_create.go:108] docker network addons-003557 192.168.49.0/24 created
	I1001 22:47:34.343410   17406 kic.go:121] calculated static IP "192.168.49.2" for the "addons-003557" container
	I1001 22:47:34.343480   17406 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1001 22:47:34.358525   17406 cli_runner.go:164] Run: docker volume create addons-003557 --label name.minikube.sigs.k8s.io=addons-003557 --label created_by.minikube.sigs.k8s.io=true
	I1001 22:47:34.376414   17406 oci.go:103] Successfully created a docker volume addons-003557
	I1001 22:47:34.376483   17406 cli_runner.go:164] Run: docker run --rm --name addons-003557-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-003557 --entrypoint /usr/bin/test -v addons-003557:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1001 22:47:41.574653   17406 cli_runner.go:217] Completed: docker run --rm --name addons-003557-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-003557 --entrypoint /usr/bin/test -v addons-003557:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (7.198132962s)
	I1001 22:47:41.574679   17406 oci.go:107] Successfully prepared a docker volume addons-003557
	I1001 22:47:41.574693   17406 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:41.574709   17406 kic.go:194] Starting extracting preloaded images to volume ...
	I1001 22:47:41.574751   17406 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-003557:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1001 22:47:45.939322   17406 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-003557:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.364532873s)
	I1001 22:47:45.939351   17406 kic.go:203] duration metric: took 4.364639329s to extract preloaded images to volume ...
	W1001 22:47:45.939467   17406 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1001 22:47:45.939559   17406 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1001 22:47:45.984733   17406 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-003557 --name addons-003557 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-003557 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-003557 --network addons-003557 --ip 192.168.49.2 --volume addons-003557:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1001 22:47:46.294775   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Running}}
	I1001 22:47:46.313253   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:47:46.331610   17406 cli_runner.go:164] Run: docker exec addons-003557 stat /var/lib/dpkg/alternatives/iptables
	I1001 22:47:46.374431   17406 oci.go:144] the created container "addons-003557" has a running status.
	I1001 22:47:46.374458   17406 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa...
	I1001 22:47:46.595347   17406 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1001 22:47:46.617714   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:47:46.637480   17406 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1001 22:47:46.637505   17406 kic_runner.go:114] Args: [docker exec --privileged addons-003557 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1001 22:47:46.742213   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:47:46.761331   17406 machine.go:93] provisionDockerMachine start ...
	I1001 22:47:46.761403   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:46.781751   17406 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.781937   17406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1001 22:47:46.781948   17406 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 22:47:46.919607   17406 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-003557
	
	I1001 22:47:46.919636   17406 ubuntu.go:169] provisioning hostname "addons-003557"
	I1001 22:47:46.919694   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:46.937167   17406 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.937371   17406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1001 22:47:46.937387   17406 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-003557 && echo "addons-003557" | sudo tee /etc/hostname
	I1001 22:47:47.075264   17406 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-003557
	
	I1001 22:47:47.075379   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.092511   17406 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:47.092712   17406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1001 22:47:47.092730   17406 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-003557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-003557/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-003557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 22:47:47.216477   17406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 22:47:47.216508   17406 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9314/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9314/.minikube}
	I1001 22:47:47.216562   17406 ubuntu.go:177] setting up certificates
	I1001 22:47:47.216574   17406 provision.go:84] configureAuth start
	I1001 22:47:47.216669   17406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-003557
	I1001 22:47:47.233955   17406 provision.go:143] copyHostCerts
	I1001 22:47:47.234027   17406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9314/.minikube/ca.pem (1078 bytes)
	I1001 22:47:47.234135   17406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9314/.minikube/cert.pem (1123 bytes)
	I1001 22:47:47.234193   17406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9314/.minikube/key.pem (1675 bytes)
	I1001 22:47:47.235066   17406 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca-key.pem org=jenkins.addons-003557 san=[127.0.0.1 192.168.49.2 addons-003557 localhost minikube]
	I1001 22:47:47.320983   17406 provision.go:177] copyRemoteCerts
	I1001 22:47:47.321036   17406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 22:47:47.321067   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.337815   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:47.428975   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 22:47:47.449509   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 22:47:47.469912   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 22:47:47.490262   17406 provision.go:87] duration metric: took 273.670781ms to configureAuth
	I1001 22:47:47.490292   17406 ubuntu.go:193] setting minikube options for container-runtime
	I1001 22:47:47.490483   17406 config.go:182] Loaded profile config "addons-003557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:47:47.490582   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.509165   17406 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:47.509353   17406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1001 22:47:47.509371   17406 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 22:47:47.720771   17406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 22:47:47.720797   17406 machine.go:96] duration metric: took 959.446625ms to provisionDockerMachine
	I1001 22:47:47.720807   17406 client.go:171] duration metric: took 13.981456631s to LocalClient.Create
	I1001 22:47:47.720822   17406 start.go:167] duration metric: took 13.98152478s to libmachine.API.Create "addons-003557"
	I1001 22:47:47.720830   17406 start.go:293] postStartSetup for "addons-003557" (driver="docker")
	I1001 22:47:47.720839   17406 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 22:47:47.720888   17406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 22:47:47.720921   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.736824   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:47.829383   17406 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 22:47:47.832326   17406 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 22:47:47.832367   17406 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 22:47:47.832379   17406 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 22:47:47.832387   17406 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 22:47:47.832403   17406 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9314/.minikube/addons for local assets ...
	I1001 22:47:47.832473   17406 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9314/.minikube/files for local assets ...
	I1001 22:47:47.832507   17406 start.go:296] duration metric: took 111.669492ms for postStartSetup
	I1001 22:47:47.832851   17406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-003557
	I1001 22:47:47.848800   17406 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/config.json ...
	I1001 22:47:47.849037   17406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 22:47:47.849084   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.867196   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:47.961081   17406 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 22:47:47.964867   17406 start.go:128] duration metric: took 14.227732466s to createHost
	I1001 22:47:47.964894   17406 start.go:83] releasing machines lock for "addons-003557", held for 14.227898395s
	I1001 22:47:47.964961   17406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-003557
	I1001 22:47:47.981226   17406 ssh_runner.go:195] Run: cat /version.json
	I1001 22:47:47.981289   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.981351   17406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 22:47:47.981415   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:47:47.999689   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:48.000381   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:47:48.167809   17406 ssh_runner.go:195] Run: systemctl --version
	I1001 22:47:48.171824   17406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 22:47:48.307475   17406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 22:47:48.311664   17406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 22:47:48.329096   17406 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1001 22:47:48.329173   17406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 22:47:48.355787   17406 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1001 22:47:48.355816   17406 start.go:495] detecting cgroup driver to use...
	I1001 22:47:48.355849   17406 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 22:47:48.355896   17406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 22:47:48.369720   17406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 22:47:48.379814   17406 docker.go:217] disabling cri-docker service (if available) ...
	I1001 22:47:48.379866   17406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 22:47:48.392241   17406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 22:47:48.405050   17406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 22:47:48.477750   17406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 22:47:48.557758   17406 docker.go:233] disabling docker service ...
	I1001 22:47:48.557825   17406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 22:47:48.574378   17406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 22:47:48.584680   17406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 22:47:48.656864   17406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 22:47:48.734388   17406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 22:47:48.744547   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 22:47:48.758939   17406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 22:47:48.758999   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.767902   17406 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 22:47:48.767969   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.777165   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.785934   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.794575   17406 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 22:47:48.802623   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.811134   17406 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.824799   17406 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.833425   17406 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 22:47:48.840567   17406 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 22:47:48.840616   17406 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 22:47:48.853034   17406 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 22:47:48.860101   17406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:47:48.932938   17406 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 22:47:49.013101   17406 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 22:47:49.013171   17406 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 22:47:49.016248   17406 start.go:563] Will wait 60s for crictl version
	I1001 22:47:49.016296   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:47:49.019138   17406 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 22:47:49.050775   17406 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1001 22:47:49.050869   17406 ssh_runner.go:195] Run: crio --version
	I1001 22:47:49.085494   17406 ssh_runner.go:195] Run: crio --version
	I1001 22:47:49.119717   17406 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1001 22:47:49.121389   17406 cli_runner.go:164] Run: docker network inspect addons-003557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 22:47:49.137600   17406 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1001 22:47:49.140940   17406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 22:47:49.150714   17406 kubeadm.go:883] updating cluster {Name:addons-003557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 22:47:49.150811   17406 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:49.150849   17406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 22:47:49.209981   17406 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 22:47:49.210001   17406 crio.go:433] Images already preloaded, skipping extraction
	I1001 22:47:49.210038   17406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 22:47:49.241730   17406 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 22:47:49.241755   17406 cache_images.go:84] Images are preloaded, skipping loading
	I1001 22:47:49.241764   17406 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1001 22:47:49.241870   17406 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-003557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 22:47:49.241948   17406 ssh_runner.go:195] Run: crio config
	I1001 22:47:49.281545   17406 cni.go:84] Creating CNI manager for ""
	I1001 22:47:49.281566   17406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 22:47:49.281578   17406 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 22:47:49.281604   17406 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-003557 NodeName:addons-003557 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 22:47:49.281753   17406 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-003557"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 22:47:49.281822   17406 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 22:47:49.289969   17406 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 22:47:49.290024   17406 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 22:47:49.297824   17406 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1001 22:47:49.313832   17406 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 22:47:49.329934   17406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1001 22:47:49.345639   17406 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1001 22:47:49.348638   17406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 22:47:49.358552   17406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:47:49.433557   17406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 22:47:49.445080   17406 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557 for IP: 192.168.49.2
	I1001 22:47:49.445103   17406 certs.go:194] generating shared ca certs ...
	I1001 22:47:49.445119   17406 certs.go:226] acquiring lock for ca certs: {Name:mk7cb0f487f2a8d9c123ba652fec1471e60d3b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.445253   17406 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9314/.minikube/ca.key
	I1001 22:47:49.585758   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt ...
	I1001 22:47:49.585786   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt: {Name:mkc5719ab44495abc481f23183d7d9e421125e39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.585957   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/ca.key ...
	I1001 22:47:49.585969   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/ca.key: {Name:mk23f3406b4f0ad789667e5d9fb6a7603bbf1ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.586037   17406 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.key
	I1001 22:47:49.658550   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.crt ...
	I1001 22:47:49.658574   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.crt: {Name:mk9cce3bf71d8e0978167d49f7c6f8e831fdefa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.658718   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.key ...
	I1001 22:47:49.658731   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.key: {Name:mkef79102ab9280df6f1a7a404a4398633d758f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.658798   17406 certs.go:256] generating profile certs ...
	I1001 22:47:49.658851   17406 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.key
	I1001 22:47:49.658861   17406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt with IP's: []
	I1001 22:47:49.793769   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt ...
	I1001 22:47:49.793796   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: {Name:mkeff56f49cc833d579e831a27db3aefd104c038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.793946   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.key ...
	I1001 22:47:49.793956   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.key: {Name:mk79d47f08c301fdf38bba01e9948b8a19b92e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.794022   17406 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key.3b7c86b9
	I1001 22:47:49.794039   17406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt.3b7c86b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1001 22:47:49.944270   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt.3b7c86b9 ...
	I1001 22:47:49.944302   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt.3b7c86b9: {Name:mkc1f3e3c50dc41c25258fc4b110b78449125159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.944459   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key.3b7c86b9 ...
	I1001 22:47:49.944471   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key.3b7c86b9: {Name:mk8074a7d76731f3de2b18828eb73edee99f98ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:49.944539   17406 certs.go:381] copying /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt.3b7c86b9 -> /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt
	I1001 22:47:49.944609   17406 certs.go:385] copying /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key.3b7c86b9 -> /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key
	I1001 22:47:49.944675   17406 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.key
	I1001 22:47:49.944692   17406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.crt with IP's: []
	I1001 22:47:50.002575   17406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.crt ...
	I1001 22:47:50.002604   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.crt: {Name:mk7aa7796444cb7db480754240edadf71c6ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:50.002756   17406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.key ...
	I1001 22:47:50.002766   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.key: {Name:mkd5e8e1c201fcd0d55d1450e1ebf7eacb34e8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:50.002932   17406 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca-key.pem (1675 bytes)
	I1001 22:47:50.002966   17406 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/ca.pem (1078 bytes)
	I1001 22:47:50.002986   17406 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/cert.pem (1123 bytes)
	I1001 22:47:50.003006   17406 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9314/.minikube/certs/key.pem (1675 bytes)
	I1001 22:47:50.003567   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 22:47:50.025057   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1001 22:47:50.045942   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 22:47:50.067774   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 22:47:50.089684   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 22:47:50.110309   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 22:47:50.130604   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 22:47:50.151444   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 22:47:50.171949   17406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 22:47:50.192683   17406 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 22:47:50.208856   17406 ssh_runner.go:195] Run: openssl version
	I1001 22:47:50.213819   17406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 22:47:50.221978   17406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:50.225107   17406 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:50.225163   17406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:50.230970   17406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 22:47:50.238894   17406 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 22:47:50.241855   17406 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 22:47:50.241895   17406 kubeadm.go:392] StartCluster: {Name:addons-003557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-003557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:50.242009   17406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 22:47:50.242051   17406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 22:47:50.279852   17406 cri.go:89] found id: ""
	I1001 22:47:50.279904   17406 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 22:47:50.288569   17406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 22:47:50.296553   17406 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1001 22:47:50.296603   17406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 22:47:50.304391   17406 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 22:47:50.304416   17406 kubeadm.go:157] found existing configuration files:
	
	I1001 22:47:50.304455   17406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 22:47:50.312308   17406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 22:47:50.312367   17406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 22:47:50.319871   17406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 22:47:50.327652   17406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 22:47:50.327702   17406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 22:47:50.335577   17406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 22:47:50.343158   17406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 22:47:50.343214   17406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 22:47:50.350509   17406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 22:47:50.357900   17406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 22:47:50.357958   17406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 22:47:50.365220   17406 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1001 22:47:50.396658   17406 kubeadm.go:310] W1001 22:47:50.395948    1294 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 22:47:50.397133   17406 kubeadm.go:310] W1001 22:47:50.396605    1294 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 22:47:50.414851   17406 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I1001 22:47:50.464349   17406 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 22:47:59.347626   17406 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 22:47:59.347682   17406 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 22:47:59.347753   17406 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1001 22:47:59.347804   17406 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I1001 22:47:59.347835   17406 kubeadm.go:310] OS: Linux
	I1001 22:47:59.347884   17406 kubeadm.go:310] CGROUPS_CPU: enabled
	I1001 22:47:59.347927   17406 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1001 22:47:59.347977   17406 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1001 22:47:59.348019   17406 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1001 22:47:59.348098   17406 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1001 22:47:59.348182   17406 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1001 22:47:59.348256   17406 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1001 22:47:59.348325   17406 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1001 22:47:59.348398   17406 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1001 22:47:59.348486   17406 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 22:47:59.348625   17406 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 22:47:59.348780   17406 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 22:47:59.348866   17406 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 22:47:59.351490   17406 out.go:235]   - Generating certificates and keys ...
	I1001 22:47:59.351593   17406 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 22:47:59.351703   17406 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 22:47:59.351807   17406 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 22:47:59.351900   17406 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 22:47:59.351994   17406 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 22:47:59.352074   17406 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 22:47:59.352146   17406 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 22:47:59.352295   17406 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-003557 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 22:47:59.352367   17406 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 22:47:59.352516   17406 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-003557 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 22:47:59.352591   17406 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 22:47:59.352710   17406 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 22:47:59.352802   17406 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 22:47:59.352889   17406 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 22:47:59.352963   17406 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 22:47:59.353047   17406 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 22:47:59.353105   17406 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 22:47:59.353163   17406 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 22:47:59.353227   17406 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 22:47:59.353315   17406 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 22:47:59.353406   17406 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 22:47:59.354888   17406 out.go:235]   - Booting up control plane ...
	I1001 22:47:59.354976   17406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 22:47:59.355049   17406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 22:47:59.355106   17406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 22:47:59.355203   17406 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 22:47:59.355299   17406 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 22:47:59.355339   17406 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 22:47:59.355448   17406 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 22:47:59.355555   17406 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 22:47:59.355605   17406 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.595998ms
	I1001 22:47:59.355669   17406 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 22:47:59.355745   17406 kubeadm.go:310] [api-check] The API server is healthy after 4.501005191s
	I1001 22:47:59.355854   17406 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 22:47:59.356014   17406 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 22:47:59.356075   17406 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 22:47:59.356319   17406 kubeadm.go:310] [mark-control-plane] Marking the node addons-003557 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 22:47:59.356399   17406 kubeadm.go:310] [bootstrap-token] Using token: cmw5x7.tpys45wndsft8y8j
	I1001 22:47:59.358932   17406 out.go:235]   - Configuring RBAC rules ...
	I1001 22:47:59.359033   17406 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 22:47:59.359105   17406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 22:47:59.359243   17406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 22:47:59.359401   17406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 22:47:59.359501   17406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 22:47:59.359575   17406 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 22:47:59.359680   17406 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 22:47:59.359720   17406 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 22:47:59.359764   17406 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 22:47:59.359770   17406 kubeadm.go:310] 
	I1001 22:47:59.359821   17406 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 22:47:59.359827   17406 kubeadm.go:310] 
	I1001 22:47:59.359905   17406 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 22:47:59.359911   17406 kubeadm.go:310] 
	I1001 22:47:59.359932   17406 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 22:47:59.359985   17406 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 22:47:59.360033   17406 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 22:47:59.360042   17406 kubeadm.go:310] 
	I1001 22:47:59.360096   17406 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 22:47:59.360104   17406 kubeadm.go:310] 
	I1001 22:47:59.360151   17406 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 22:47:59.360161   17406 kubeadm.go:310] 
	I1001 22:47:59.360227   17406 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 22:47:59.360301   17406 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 22:47:59.360380   17406 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 22:47:59.360392   17406 kubeadm.go:310] 
	I1001 22:47:59.360518   17406 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 22:47:59.360636   17406 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 22:47:59.360660   17406 kubeadm.go:310] 
	I1001 22:47:59.360797   17406 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cmw5x7.tpys45wndsft8y8j \
	I1001 22:47:59.360926   17406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:015f6e369eec431c1e019d1e80b3ccdc24b9f33f6d33017d1e56e33b1290b97b \
	I1001 22:47:59.360958   17406 kubeadm.go:310] 	--control-plane 
	I1001 22:47:59.360968   17406 kubeadm.go:310] 
	I1001 22:47:59.361088   17406 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 22:47:59.361096   17406 kubeadm.go:310] 
	I1001 22:47:59.361179   17406 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cmw5x7.tpys45wndsft8y8j \
	I1001 22:47:59.361299   17406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:015f6e369eec431c1e019d1e80b3ccdc24b9f33f6d33017d1e56e33b1290b97b 
	I1001 22:47:59.361310   17406 cni.go:84] Creating CNI manager for ""
	I1001 22:47:59.361316   17406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 22:47:59.362823   17406 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 22:47:59.364160   17406 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 22:47:59.367784   17406 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 22:47:59.367802   17406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 22:47:59.385208   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 22:47:59.578317   17406 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 22:47:59.578397   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:47:59.578450   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-003557 minikube.k8s.io/updated_at=2024_10_01T22_47_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=addons-003557 minikube.k8s.io/primary=true
	I1001 22:47:59.584962   17406 ops.go:34] apiserver oom_adj: -16
	I1001 22:47:59.656946   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:00.157815   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:00.657621   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:01.157633   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:01.657846   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:02.156964   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:02.657855   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:03.157706   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:03.657568   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.157889   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.657850   17406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.721346   17406 kubeadm.go:1113] duration metric: took 5.142994627s to wait for elevateKubeSystemPrivileges
	I1001 22:48:04.721392   17406 kubeadm.go:394] duration metric: took 14.479499895s to StartCluster
	I1001 22:48:04.721417   17406 settings.go:142] acquiring lock: {Name:mk0d6ca98bed6b4aaaa1127bd072eb7aeabfdcd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:48:04.721541   17406 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9314/kubeconfig
	I1001 22:48:04.722040   17406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/kubeconfig: {Name:mk09b2ba83f78d625a17bbeb72a5433822606f82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:48:04.722283   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 22:48:04.722303   17406 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 22:48:04.722382   17406 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1001 22:48:04.722492   17406 addons.go:69] Setting yakd=true in profile "addons-003557"
	I1001 22:48:04.722529   17406 addons.go:234] Setting addon yakd=true in "addons-003557"
	I1001 22:48:04.722534   17406 config.go:182] Loaded profile config "addons-003557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:48:04.722562   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.722577   17406 addons.go:69] Setting inspektor-gadget=true in profile "addons-003557"
	I1001 22:48:04.722588   17406 addons.go:234] Setting addon inspektor-gadget=true in "addons-003557"
	I1001 22:48:04.722606   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.722792   17406 addons.go:69] Setting storage-provisioner=true in profile "addons-003557"
	I1001 22:48:04.722813   17406 addons.go:234] Setting addon storage-provisioner=true in "addons-003557"
	I1001 22:48:04.722846   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.723093   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723134   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723289   17406 addons.go:69] Setting volcano=true in profile "addons-003557"
	I1001 22:48:04.723321   17406 addons.go:234] Setting addon volcano=true in "addons-003557"
	I1001 22:48:04.723319   17406 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-003557"
	I1001 22:48:04.723362   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.723362   17406 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-003557"
	I1001 22:48:04.723371   17406 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-003557"
	I1001 22:48:04.723372   17406 addons.go:69] Setting metrics-server=true in profile "addons-003557"
	I1001 22:48:04.723402   17406 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-003557"
	I1001 22:48:04.723405   17406 addons.go:234] Setting addon metrics-server=true in "addons-003557"
	I1001 22:48:04.723427   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.723450   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.723655   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723841   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723850   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723898   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.723984   17406 addons.go:69] Setting default-storageclass=true in profile "addons-003557"
	I1001 22:48:04.724027   17406 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-003557"
	I1001 22:48:04.724048   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.724317   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.724335   17406 addons.go:69] Setting volumesnapshots=true in profile "addons-003557"
	I1001 22:48:04.724351   17406 addons.go:234] Setting addon volumesnapshots=true in "addons-003557"
	I1001 22:48:04.725675   17406 out.go:177] * Verifying Kubernetes components...
	I1001 22:48:04.725814   17406 addons.go:69] Setting ingress=true in profile "addons-003557"
	I1001 22:48:04.725822   17406 addons.go:69] Setting ingress-dns=true in profile "addons-003557"
	I1001 22:48:04.725842   17406 addons.go:234] Setting addon ingress-dns=true in "addons-003557"
	I1001 22:48:04.725885   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.725932   17406 addons.go:234] Setting addon ingress=true in "addons-003557"
	I1001 22:48:04.725706   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.725976   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.726056   17406 addons.go:69] Setting cloud-spanner=true in profile "addons-003557"
	I1001 22:48:04.726094   17406 addons.go:234] Setting addon cloud-spanner=true in "addons-003557"
	I1001 22:48:04.726176   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.726610   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.727259   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.727275   17406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:48:04.727548   17406 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-003557"
	I1001 22:48:04.727631   17406 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-003557"
	I1001 22:48:04.727667   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.727927   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.728228   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.728461   17406 addons.go:69] Setting gcp-auth=true in profile "addons-003557"
	I1001 22:48:04.728502   17406 mustload.go:65] Loading cluster: addons-003557
	I1001 22:48:04.729831   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.724320   17406 addons.go:69] Setting registry=true in profile "addons-003557"
	I1001 22:48:04.731909   17406 addons.go:234] Setting addon registry=true in "addons-003557"
	I1001 22:48:04.731976   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.761349   17406 config.go:182] Loaded profile config "addons-003557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:48:04.761709   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.773679   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	W1001 22:48:04.774632   17406 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1001 22:48:04.776439   17406 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1001 22:48:04.780935   17406 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 22:48:04.782822   17406 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 22:48:04.782844   17406 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 22:48:04.782949   17406 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 22:48:04.782960   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.783297   17406 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 22:48:04.783310   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 22:48:04.783362   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.783379   17406 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 22:48:04.785676   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 22:48:04.785698   17406 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 22:48:04.785762   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.786025   17406 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 22:48:04.786036   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 22:48:04.786073   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.803474   17406 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 22:48:04.809127   17406 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:04.809981   17406 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 22:48:04.810009   17406 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 22:48:04.810090   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.815601   17406 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 22:48:04.818138   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.836503   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.838435   17406 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-003557"
	I1001 22:48:04.838479   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.834687   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.838890   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.838991   17406 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:04.839037   17406 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 22:48:04.839062   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 22:48:04.841082   17406 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 22:48:04.841100   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 22:48:04.841147   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.841459   17406 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 22:48:04.841472   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 22:48:04.841515   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.841989   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 22:48:04.842004   17406 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 22:48:04.842050   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.840584   17406 addons.go:234] Setting addon default-storageclass=true in "addons-003557"
	I1001 22:48:04.842644   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:04.843130   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:04.848289   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.860375   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.862128   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 22:48:04.863039   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.864963   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 22:48:04.867128   17406 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1001 22:48:04.868190   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 22:48:04.868695   17406 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 22:48:04.870267   17406 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 22:48:04.870388   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 22:48:04.871849   17406 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 22:48:04.871869   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 22:48:04.871928   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.873552   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 22:48:04.874769   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 22:48:04.875803   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 22:48:04.876787   17406 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 22:48:04.876808   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 22:48:04.876871   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.877877   17406 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 22:48:04.879027   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 22:48:04.879046   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 22:48:04.879108   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.896988   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.897772   17406 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 22:48:04.898037   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.899331   17406 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 22:48:04.899346   17406 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 22:48:04.899393   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.899943   17406 out.go:177]   - Using image docker.io/busybox:stable
	I1001 22:48:04.901330   17406 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 22:48:04.901350   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 22:48:04.901396   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:04.903647   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.906366   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.906901   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.911371   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.919919   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:04.923190   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	W1001 22:48:04.937748   17406 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1001 22:48:04.937784   17406 retry.go:31] will retry after 245.553478ms: ssh: handshake failed: EOF
	I1001 22:48:05.044460   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 22:48:05.149504   17406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 22:48:05.244763   17406 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 22:48:05.244790   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 22:48:05.247441   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 22:48:05.247467   17406 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 22:48:05.255151   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 22:48:05.350397   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 22:48:05.353981   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 22:48:05.356688   17406 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 22:48:05.356772   17406 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 22:48:05.434081   17406 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 22:48:05.434111   17406 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 22:48:05.440774   17406 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 22:48:05.440799   17406 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 22:48:05.442127   17406 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 22:48:05.442153   17406 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 22:48:05.442498   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 22:48:05.447581   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 22:48:05.448702   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 22:48:05.448767   17406 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 22:48:05.537858   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 22:48:05.634188   17406 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 22:48:05.634282   17406 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 22:48:05.634658   17406 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 22:48:05.634714   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 22:48:05.656103   17406 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 22:48:05.656196   17406 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 22:48:05.734394   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 22:48:05.734486   17406 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 22:48:05.738382   17406 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 22:48:05.738465   17406 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 22:48:05.750681   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 22:48:05.834178   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 22:48:05.834260   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 22:48:05.853449   17406 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 22:48:05.853557   17406 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 22:48:05.854544   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 22:48:05.937457   17406 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 22:48:05.937544   17406 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 22:48:05.955138   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 22:48:06.040282   17406 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 22:48:06.040361   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 22:48:06.055459   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 22:48:06.055512   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 22:48:06.233415   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 22:48:06.334921   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 22:48:06.335013   17406 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 22:48:06.446553   17406 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 22:48:06.446583   17406 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 22:48:06.448065   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 22:48:06.448120   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 22:48:06.648982   17406 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:06.649091   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 22:48:06.744867   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 22:48:06.744964   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 22:48:06.839947   17406 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 22:48:06.840038   17406 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 22:48:06.847989   17406 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.803435883s)
	I1001 22:48:06.848109   17406 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1001 22:48:06.848297   17406 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.698707251s)
	I1001 22:48:06.850634   17406 node_ready.go:35] waiting up to 6m0s for node "addons-003557" to be "Ready" ...
	I1001 22:48:06.936095   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.680884918s)
	I1001 22:48:06.952185   17406 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 22:48:06.952290   17406 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 22:48:07.046029   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:07.152152   17406 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 22:48:07.152183   17406 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 22:48:07.253851   17406 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 22:48:07.253930   17406 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 22:48:07.546186   17406 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-003557" context rescaled to 1 replicas
	I1001 22:48:07.643139   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 22:48:07.643168   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 22:48:07.736064   17406 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 22:48:07.736092   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 22:48:07.843017   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 22:48:07.843045   17406 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 22:48:07.945987   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 22:48:08.149556   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 22:48:08.149657   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 22:48:08.347596   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 22:48:08.347625   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 22:48:08.453419   17406 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 22:48:08.453461   17406 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 22:48:08.551170   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 22:48:08.934245   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:09.156296   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.805793746s)
	I1001 22:48:09.156443   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.80237087s)
	I1001 22:48:09.156545   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.714018563s)
	I1001 22:48:10.262196   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.814578716s)
	I1001 22:48:10.262228   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.724325869s)
	I1001 22:48:10.262241   17406 addons.go:475] Verifying addon ingress=true in "addons-003557"
	I1001 22:48:10.262295   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.511532565s)
	I1001 22:48:10.262344   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.407731054s)
	I1001 22:48:10.262424   17406 addons.go:475] Verifying addon registry=true in "addons-003557"
	I1001 22:48:10.262449   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.307281624s)
	I1001 22:48:10.262524   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.029014048s)
	I1001 22:48:10.262471   17406 addons.go:475] Verifying addon metrics-server=true in "addons-003557"
	I1001 22:48:10.263932   17406 out.go:177] * Verifying ingress addon...
	I1001 22:48:10.264861   17406 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-003557 service yakd-dashboard -n yakd-dashboard
	
	I1001 22:48:10.264948   17406 out.go:177] * Verifying registry addon...
	I1001 22:48:10.266635   17406 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 22:48:10.267328   17406 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 22:48:10.271443   17406 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 22:48:10.271467   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:10.271684   17406 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 22:48:10.271705   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:10.839083   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:10.840074   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:10.969733   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.923654926s)
	W1001 22:48:10.969775   17406 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 22:48:10.969799   17406 retry.go:31] will retry after 348.931901ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
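
	[Editor's note: the failure above is the usual CRD-establishment race: the CRDs and a VolumeSnapshotClass that depends on them are applied in the same `kubectl apply`, so the API server has no mapping for the new kind yet; minikube simply retries until the CRDs are registered. A minimal sketch of how this race can be avoided manually (hypothetical file names; requires a live cluster, not runnable here):

		# 1. Apply only the CRD manifests first.
		kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
		              -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
		              -f snapshot.storage.k8s.io_volumesnapshots.yaml

		# 2. Block until the API server has established the CRDs.
		kubectl wait --for=condition=Established \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
		  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
		  crd/volumesnapshots.snapshot.storage.k8s.io --timeout=60s

		# 3. Only then apply resources of the new kinds.
		kubectl apply -f csi-hostpath-snapshotclass.yaml

	End of note; the log resumes below.]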
	I1001 22:48:10.969848   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.023813837s)
	I1001 22:48:11.273518   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:11.275365   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:11.300306   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.749082805s)
	I1001 22:48:11.300338   17406 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-003557"
	I1001 22:48:11.301903   17406 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 22:48:11.304072   17406 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 22:48:11.319594   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:11.336495   17406 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 22:48:11.336516   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:11.356466   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:11.770754   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:11.771281   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:11.807174   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:12.044346   17406 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 22:48:12.044431   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:12.064044   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:12.335702   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:12.336336   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:12.336575   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:12.455616   17406 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 22:48:12.556372   17406 addons.go:234] Setting addon gcp-auth=true in "addons-003557"
	I1001 22:48:12.556451   17406 host.go:66] Checking if "addons-003557" exists ...
	I1001 22:48:12.556977   17406 cli_runner.go:164] Run: docker container inspect addons-003557 --format={{.State.Status}}
	I1001 22:48:12.575077   17406 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 22:48:12.575122   17406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-003557
	I1001 22:48:12.591755   17406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/addons-003557/id_rsa Username:docker}
	I1001 22:48:12.770978   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:12.771705   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:12.806993   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:13.270247   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:13.270626   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:13.307899   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:13.771156   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:13.771818   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:13.834047   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:13.853263   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:14.077275   17406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.757629461s)
	I1001 22:48:14.077300   17406 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.502192166s)
	I1001 22:48:14.079522   17406 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:14.081106   17406 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 22:48:14.082569   17406 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 22:48:14.082599   17406 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 22:48:14.145508   17406 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 22:48:14.145533   17406 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 22:48:14.163241   17406 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 22:48:14.163268   17406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 22:48:14.180620   17406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 22:48:14.271227   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:14.272236   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:14.334806   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:14.773664   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:14.835304   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:14.836087   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:14.849597   17406 addons.go:475] Verifying addon gcp-auth=true in "addons-003557"
	I1001 22:48:14.851291   17406 out.go:177] * Verifying gcp-auth addon...
	I1001 22:48:14.853922   17406 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 22:48:14.873862   17406 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 22:48:14.873883   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:15.270338   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:15.270783   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:15.307246   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:15.356807   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:15.770632   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:15.771171   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:15.808020   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:15.854301   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:15.856746   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:16.269987   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:16.270526   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:16.307533   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:16.356514   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:16.769934   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:16.770151   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:16.807684   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:16.856036   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:17.270438   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:17.270876   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:17.306905   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:17.356020   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:17.770021   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:17.770447   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:17.807401   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:17.856483   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:18.270001   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:18.270263   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:18.307622   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:18.353927   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:18.356795   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:18.770243   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:18.770423   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:18.807182   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:18.856355   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:19.270293   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:19.270760   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:19.306983   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:19.356333   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:19.770236   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:19.770588   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:19.806931   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:19.856132   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:20.269764   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:20.270131   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:20.307295   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:20.356684   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:20.769977   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:20.770289   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:20.806998   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:20.854017   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:20.856021   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:21.269990   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:21.270671   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:21.307571   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:21.356215   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:21.770045   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:21.770457   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:21.807411   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:21.856441   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.270231   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:22.270560   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:22.306891   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:22.356306   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.770278   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:22.770714   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:22.806988   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:22.854310   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:22.856551   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:23.270719   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:23.271080   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:23.307136   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.356463   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:23.770257   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:23.771091   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:23.806871   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.856467   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:24.270272   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:24.270697   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:24.307124   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:24.356423   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:24.770236   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:24.770810   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:24.806984   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:24.856308   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:25.270107   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:25.270486   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:25.307817   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:25.354036   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:25.356090   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:25.770115   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:25.770452   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:25.807859   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:25.856181   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:26.270067   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:26.270486   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:26.307892   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:26.356119   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:26.770110   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:26.770450   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:26.807460   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:26.856438   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:27.269982   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:27.269994   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:27.307505   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:27.356505   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:27.770059   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:27.770173   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:27.807736   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:27.854192   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:27.856498   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:28.270727   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:28.271282   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:28.307457   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:28.356581   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:28.769998   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:28.770036   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:28.808059   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:28.856145   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:29.270651   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:29.271118   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:29.307124   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:29.356565   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:29.769995   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:29.770342   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:29.807833   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:29.854293   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:29.855938   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:30.269821   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:30.270065   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:30.307631   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:30.356474   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:30.770463   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:30.770939   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:30.807015   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:30.856096   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:31.269986   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:31.270454   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:31.307836   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:31.356734   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:31.770032   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:31.770050   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:31.807504   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:31.856727   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:32.269915   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:32.270833   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:32.307963   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:32.354561   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:32.356222   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:32.770407   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:32.770750   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:32.807981   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:32.856347   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:33.271288   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:33.271840   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:33.307145   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:33.356512   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:33.770649   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:33.770939   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:33.806929   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:33.856186   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:34.270071   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:34.270606   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:34.307793   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:34.356450   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:34.770276   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:34.770646   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:34.807747   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:34.854253   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:34.856401   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:35.271929   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:35.272433   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:35.307747   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:35.356210   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:35.770015   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:35.770592   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:35.807721   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:35.856131   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:36.270003   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:36.270485   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:36.307733   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:36.356195   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:36.770066   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:36.770619   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:36.807735   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:36.856197   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:37.270033   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:37.270462   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:37.307551   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:37.353739   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:37.356249   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:37.770085   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:37.770491   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:37.807719   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:37.856832   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:38.270224   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:38.270417   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:38.307708   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:38.356674   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:38.769885   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:38.770088   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:38.807819   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:38.856128   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:39.269829   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:39.270211   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:39.307494   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:39.356698   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:39.770098   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:39.770533   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:39.807823   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:39.853947   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:39.856073   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:40.269795   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:40.270324   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:40.307346   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:40.356454   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:40.770387   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:40.770930   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:40.807199   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:40.856106   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:41.269868   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:41.270414   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:41.307660   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:41.356475   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:41.770496   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:41.770779   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:41.806765   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:41.856401   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:42.270030   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:42.270465   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:42.307901   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:42.354151   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:42.356375   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:42.770367   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:42.770789   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:42.807048   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:42.856035   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.269881   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:43.270408   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:43.307546   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:43.356320   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.770303   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:43.771009   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:43.807005   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:43.856572   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:44.270083   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:44.270266   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:44.307819   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.354296   17406 node_ready.go:53] node "addons-003557" has status "Ready":"False"
	I1001 22:48:44.356206   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:44.769948   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:44.770600   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:44.807719   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.856153   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:45.269902   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:45.270601   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:45.307650   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:45.356201   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:45.770047   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:45.770413   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:45.807516   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:45.856880   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.270244   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:46.270513   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:46.307171   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:46.356894   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.837009   17406 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 22:48:46.837099   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:46.837636   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:46.838246   17406 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 22:48:46.838308   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:46.854470   17406 node_ready.go:49] node "addons-003557" has status "Ready":"True"
	I1001 22:48:46.854499   17406 node_ready.go:38] duration metric: took 40.003787077s for node "addons-003557" to be "Ready" ...
	I1001 22:48:46.854510   17406 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 22:48:46.856967   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.862357   17406 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6cj4k" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:47.270868   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:47.271080   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:47.370305   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:47.371426   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:47.771279   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:47.771648   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:47.808528   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:47.859040   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:48.270472   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:48.270676   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:48.308675   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:48.357484   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:48.368144   17406 pod_ready.go:93] pod "coredns-7c65d6cfc9-6cj4k" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.368167   17406 pod_ready.go:82] duration metric: took 1.50578281s for pod "coredns-7c65d6cfc9-6cj4k" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.368192   17406 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.372085   17406 pod_ready.go:93] pod "etcd-addons-003557" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.372104   17406 pod_ready.go:82] duration metric: took 3.902562ms for pod "etcd-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.372119   17406 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.375948   17406 pod_ready.go:93] pod "kube-apiserver-addons-003557" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.375967   17406 pod_ready.go:82] duration metric: took 3.84126ms for pod "kube-apiserver-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.375975   17406 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.380086   17406 pod_ready.go:93] pod "kube-controller-manager-addons-003557" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.380108   17406 pod_ready.go:82] duration metric: took 4.126277ms for pod "kube-controller-manager-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.380119   17406 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-69j2j" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.454950   17406 pod_ready.go:93] pod "kube-proxy-69j2j" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.454976   17406 pod_ready.go:82] duration metric: took 74.851467ms for pod "kube-proxy-69j2j" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.454987   17406 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.773077   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:48.773436   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:48.808014   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:48.855063   17406 pod_ready.go:93] pod "kube-scheduler-addons-003557" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:48.855086   17406 pod_ready.go:82] duration metric: took 400.091569ms for pod "kube-scheduler-addons-003557" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.855098   17406 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:48.856702   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:49.335998   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:49.336533   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:49.336939   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:49.359877   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:49.839078   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:49.839237   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:49.839310   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:49.938791   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.345027   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:50.346478   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:50.347147   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:50.357494   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.838482   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:50.838874   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:50.839621   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:50.858886   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.862241   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:51.270295   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:51.270601   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:51.336927   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:51.357967   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:51.770922   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:51.771059   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:51.836453   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:51.857686   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:52.270788   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:52.270934   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:52.336063   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:52.357048   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:52.770644   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:52.770913   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:52.808138   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:52.857323   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:53.270387   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:53.270475   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:53.308919   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:53.357904   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:53.360086   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:53.770330   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:53.770687   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:53.809429   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:53.857845   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.270502   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:54.270498   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:54.308584   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:54.357803   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.770928   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:54.771192   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:54.807750   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:54.857831   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:55.270553   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:55.270717   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:55.307464   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.357542   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:55.360155   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:55.770389   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:55.770752   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:55.807837   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.857751   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:56.270662   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:56.270901   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:56.308108   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:56.357686   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:56.770335   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:56.770629   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:56.808461   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:56.857327   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:57.347712   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:57.349227   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:57.350294   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:57.358360   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:57.437893   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:57.770818   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:57.771190   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:57.836857   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:57.857499   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:58.271017   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:58.271255   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:58.308013   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:58.357136   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:58.770774   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:58.770998   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:58.808838   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:58.858035   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.270849   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:59.271083   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:59.308576   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:59.357554   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.770620   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:59.770839   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:59.808862   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:59.858005   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.860381   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:00.270886   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:00.271415   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:00.308468   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:00.357595   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:00.773372   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:00.773966   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:00.835696   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:00.857469   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:01.270900   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:01.271430   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:01.308330   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:01.357305   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:01.770890   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:01.771293   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:01.836611   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:01.857533   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:02.270434   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:02.270678   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:02.308450   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:02.357390   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:02.360382   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:02.773069   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:02.773263   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:02.808329   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:02.857388   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:03.271040   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:03.271342   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:03.308106   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:03.357056   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:03.770223   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:03.770477   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:03.807853   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:03.856815   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.270731   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:04.271510   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:04.308013   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:04.357172   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.770355   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:04.770476   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:04.859693   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:04.870022   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.870688   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.270706   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:05.270950   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:05.307672   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.357503   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:05.770328   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:05.770491   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:05.808800   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.857910   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.270987   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:06.271489   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:06.334933   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:06.356867   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.770692   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:06.770722   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:06.836543   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:06.857444   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.860800   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:07.270459   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.272101   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:07.336262   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:07.356967   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:07.770289   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.770373   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:07.808875   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:07.857723   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.270632   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:08.270824   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:08.308919   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:08.357677   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.770660   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:08.770930   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:08.870894   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.872072   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.270587   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:09.270885   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:09.307904   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.357870   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:09.360027   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:09.770574   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:09.770783   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:09.808133   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.857833   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:10.271803   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:10.271988   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:10.337093   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:10.357638   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:10.771386   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:10.771743   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:10.836500   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:10.857965   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:11.270803   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:11.271699   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:11.309242   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:11.357426   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:11.360351   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:11.773846   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:11.774503   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:11.808005   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:11.856894   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.271214   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:12.271675   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.371473   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.372812   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.770818   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:12.770907   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.808107   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.857108   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:13.271083   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:13.271183   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:13.308623   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:13.357278   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:13.770812   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:13.770929   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:13.807756   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:13.861712   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:13.870668   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:14.272188   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:14.272874   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:14.337169   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:14.358021   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:14.770775   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:14.772065   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:14.808123   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:14.857368   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:15.270485   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:15.270690   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:15.308475   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:15.434687   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:15.771027   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:15.771293   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:15.808035   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:15.856731   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:16.270650   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:16.270965   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:16.308084   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:16.357740   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:16.360682   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:16.772053   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:16.773234   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:16.807829   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:16.873374   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:17.270946   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:17.271240   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:17.308132   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:17.357062   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:17.770655   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:17.770984   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:17.809163   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:17.856765   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:18.270066   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:18.270460   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:18.308408   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:18.356777   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:18.771346   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:18.771772   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:18.837119   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:18.860725   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:18.936800   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:19.270448   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:19.271053   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:19.308266   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:19.357722   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:19.771255   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:19.771498   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:19.808963   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:19.858064   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:20.270811   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:20.271902   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:20.335248   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:20.357353   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:20.771141   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:20.771512   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:20.809271   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:20.857803   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:21.270737   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:21.271005   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:21.307551   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:21.360326   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:21.371789   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:21.836601   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:21.837451   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:21.838103   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:21.858607   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:22.337567   17406 kapi.go:107] duration metric: took 1m12.070234926s to wait for kubernetes.io/minikube-addons=registry ...
	I1001 22:49:22.337994   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:22.337991   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:22.357817   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:22.845550   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:22.849243   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:22.857437   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:23.336672   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:23.337254   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:23.359618   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:23.362724   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:23.771199   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:23.836551   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:23.857822   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:24.271163   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:24.335559   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:24.357599   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:24.770485   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:24.808027   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:24.857015   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.270777   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:25.309052   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:25.357842   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.771515   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:25.808258   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:25.857186   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.860833   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:26.271490   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:26.308389   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:26.357774   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:26.771369   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:26.807964   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:26.857046   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:27.275532   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:27.333670   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:27.375626   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:27.771294   17406 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:27.808115   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:27.857299   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:28.270896   17406 kapi.go:107] duration metric: took 1m18.004259194s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 22:49:28.336478   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:28.357413   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:28.360085   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:28.808056   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:28.856872   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:29.309372   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:29.357060   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:29.836725   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:29.857414   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:30.308336   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:30.357183   17406 kapi.go:107] duration metric: took 1m15.503256547s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 22:49:30.359016   17406 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-003557 cluster.
	I1001 22:49:30.360475   17406 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 22:49:30.361943   17406 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
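The `gcp-auth-skip-secret` hint printed above is applied as a pod label. A minimal sketch of such a manifest, assuming the documented label key; the pod name and image here are placeholders, not from this test run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-auth-demo          # hypothetical pod name
  labels:
    gcp-auth-skip-secret: "true"  # opt this pod out of credential mounting
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
```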
	I1001 22:49:30.836931   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:30.861406   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:31.308572   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:31.809000   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:32.308347   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:32.839552   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:33.309028   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:33.360767   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:33.809095   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:34.308105   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:34.808624   17406 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:35.308503   17406 kapi.go:107] duration metric: took 1m24.004430297s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 22:49:35.311229   17406 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, storage-provisioner-rancher, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1001 22:49:35.313435   17406 addons.go:510] duration metric: took 1m30.591052269s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner storage-provisioner-rancher ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1001 22:49:35.860038   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:38.360285   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:40.360731   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:42.860662   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:44.861364   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:47.360960   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:49.361415   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:51.860345   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:53.860554   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:55.861005   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:49:58.360325   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:00.360982   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:02.860466   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:05.361169   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:07.860795   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:10.360845   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:12.361030   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:14.361251   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:16.861153   17406 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"False"
	I1001 22:50:17.360542   17406 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace has status "Ready":"True"
	I1001 22:50:17.360567   17406 pod_ready.go:82] duration metric: took 1m28.505461068s for pod "metrics-server-84c5f94fbc-zjg7c" in "kube-system" namespace to be "Ready" ...
	I1001 22:50:17.360580   17406 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lqq7d" in "kube-system" namespace to be "Ready" ...
	I1001 22:50:17.364938   17406 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lqq7d" in "kube-system" namespace has status "Ready":"True"
	I1001 22:50:17.364959   17406 pod_ready.go:82] duration metric: took 4.371492ms for pod "nvidia-device-plugin-daemonset-lqq7d" in "kube-system" namespace to be "Ready" ...
	I1001 22:50:17.364976   17406 pod_ready.go:39] duration metric: took 1m30.510453662s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 22:50:17.364992   17406 api_server.go:52] waiting for apiserver process to appear ...
	I1001 22:50:17.365019   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 22:50:17.365069   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 22:50:17.398297   17406 cri.go:89] found id: "87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:17.398316   17406 cri.go:89] found id: ""
	I1001 22:50:17.398323   17406 logs.go:282] 1 containers: [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f]
	I1001 22:50:17.398363   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.401435   17406 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 22:50:17.401495   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 22:50:17.433832   17406 cri.go:89] found id: "e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:17.433858   17406 cri.go:89] found id: ""
	I1001 22:50:17.433868   17406 logs.go:282] 1 containers: [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f]
	I1001 22:50:17.433927   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.437182   17406 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 22:50:17.437254   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 22:50:17.470922   17406 cri.go:89] found id: "d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:17.470943   17406 cri.go:89] found id: ""
	I1001 22:50:17.470951   17406 logs.go:282] 1 containers: [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955]
	I1001 22:50:17.471003   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.474323   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 22:50:17.474391   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 22:50:17.507143   17406 cri.go:89] found id: "8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:17.507167   17406 cri.go:89] found id: ""
	I1001 22:50:17.507174   17406 logs.go:282] 1 containers: [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca]
	I1001 22:50:17.507247   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.510488   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 22:50:17.510547   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 22:50:17.543241   17406 cri.go:89] found id: "1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:17.543264   17406 cri.go:89] found id: ""
	I1001 22:50:17.543274   17406 logs.go:282] 1 containers: [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1]
	I1001 22:50:17.543341   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.546521   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 22:50:17.546585   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 22:50:17.579896   17406 cri.go:89] found id: "7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:17.579916   17406 cri.go:89] found id: ""
	I1001 22:50:17.579925   17406 logs.go:282] 1 containers: [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101]
	I1001 22:50:17.579965   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.583212   17406 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 22:50:17.583278   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 22:50:17.616058   17406 cri.go:89] found id: "6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:17.616086   17406 cri.go:89] found id: ""
	I1001 22:50:17.616096   17406 logs.go:282] 1 containers: [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5]
	I1001 22:50:17.616147   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:17.619409   17406 logs.go:123] Gathering logs for container status ...
	I1001 22:50:17.619436   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 22:50:17.660545   17406 logs.go:123] Gathering logs for kubelet ...
	I1001 22:50:17.660572   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 22:50:17.746011   17406 logs.go:123] Gathering logs for describe nodes ...
	I1001 22:50:17.746044   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 22:50:17.842009   17406 logs.go:123] Gathering logs for kube-apiserver [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f] ...
	I1001 22:50:17.842040   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:17.885176   17406 logs.go:123] Gathering logs for coredns [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955] ...
	I1001 22:50:17.885206   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:17.919459   17406 logs.go:123] Gathering logs for kindnet [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5] ...
	I1001 22:50:17.919491   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:17.953281   17406 logs.go:123] Gathering logs for CRI-O ...
	I1001 22:50:17.953312   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 22:50:18.025721   17406 logs.go:123] Gathering logs for dmesg ...
	I1001 22:50:18.025755   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 22:50:18.038322   17406 logs.go:123] Gathering logs for etcd [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f] ...
	I1001 22:50:18.038357   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:18.088439   17406 logs.go:123] Gathering logs for kube-scheduler [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca] ...
	I1001 22:50:18.088475   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:18.127405   17406 logs.go:123] Gathering logs for kube-proxy [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1] ...
	I1001 22:50:18.127444   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:18.161259   17406 logs.go:123] Gathering logs for kube-controller-manager [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101] ...
	I1001 22:50:18.161286   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:20.716346   17406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 22:50:20.729648   17406 api_server.go:72] duration metric: took 2m16.007309449s to wait for apiserver process to appear ...
	I1001 22:50:20.729676   17406 api_server.go:88] waiting for apiserver healthz status ...
	I1001 22:50:20.729734   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 22:50:20.729781   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 22:50:20.761825   17406 cri.go:89] found id: "87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:20.761846   17406 cri.go:89] found id: ""
	I1001 22:50:20.761854   17406 logs.go:282] 1 containers: [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f]
	I1001 22:50:20.761897   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.765056   17406 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 22:50:20.765117   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 22:50:20.798095   17406 cri.go:89] found id: "e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:20.798121   17406 cri.go:89] found id: ""
	I1001 22:50:20.798131   17406 logs.go:282] 1 containers: [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f]
	I1001 22:50:20.798175   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.801381   17406 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 22:50:20.801441   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 22:50:20.833580   17406 cri.go:89] found id: "d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:20.833602   17406 cri.go:89] found id: ""
	I1001 22:50:20.833611   17406 logs.go:282] 1 containers: [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955]
	I1001 22:50:20.833659   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.836924   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 22:50:20.836978   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 22:50:20.870184   17406 cri.go:89] found id: "8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:20.870207   17406 cri.go:89] found id: ""
	I1001 22:50:20.870218   17406 logs.go:282] 1 containers: [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca]
	I1001 22:50:20.870265   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.873386   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 22:50:20.873448   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 22:50:20.906120   17406 cri.go:89] found id: "1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:20.906145   17406 cri.go:89] found id: ""
	I1001 22:50:20.906153   17406 logs.go:282] 1 containers: [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1]
	I1001 22:50:20.906210   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.909798   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 22:50:20.909856   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 22:50:20.942582   17406 cri.go:89] found id: "7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:20.942607   17406 cri.go:89] found id: ""
	I1001 22:50:20.942616   17406 logs.go:282] 1 containers: [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101]
	I1001 22:50:20.942662   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.945891   17406 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 22:50:20.945948   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 22:50:20.979406   17406 cri.go:89] found id: "6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:20.979425   17406 cri.go:89] found id: ""
	I1001 22:50:20.979440   17406 logs.go:282] 1 containers: [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5]
	I1001 22:50:20.979482   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:20.983203   17406 logs.go:123] Gathering logs for dmesg ...
	I1001 22:50:20.983232   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 22:50:20.995293   17406 logs.go:123] Gathering logs for kube-apiserver [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f] ...
	I1001 22:50:20.995318   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:21.038301   17406 logs.go:123] Gathering logs for coredns [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955] ...
	I1001 22:50:21.038336   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:21.074335   17406 logs.go:123] Gathering logs for kube-scheduler [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca] ...
	I1001 22:50:21.074365   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:21.113455   17406 logs.go:123] Gathering logs for kube-proxy [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1] ...
	I1001 22:50:21.113484   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:21.145704   17406 logs.go:123] Gathering logs for CRI-O ...
	I1001 22:50:21.145730   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 22:50:21.217978   17406 logs.go:123] Gathering logs for kubelet ...
	I1001 22:50:21.218016   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 22:50:21.299409   17406 logs.go:123] Gathering logs for describe nodes ...
	I1001 22:50:21.299444   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 22:50:21.395877   17406 logs.go:123] Gathering logs for etcd [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f] ...
	I1001 22:50:21.395909   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:21.444896   17406 logs.go:123] Gathering logs for kube-controller-manager [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101] ...
	I1001 22:50:21.444933   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:21.500566   17406 logs.go:123] Gathering logs for kindnet [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5] ...
	I1001 22:50:21.500599   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:21.535184   17406 logs.go:123] Gathering logs for container status ...
	I1001 22:50:21.535223   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 22:50:24.075896   17406 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1001 22:50:24.080316   17406 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1001 22:50:24.081179   17406 api_server.go:141] control plane version: v1.31.1
	I1001 22:50:24.081202   17406 api_server.go:131] duration metric: took 3.351518463s to wait for apiserver health ...
	I1001 22:50:24.081210   17406 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 22:50:24.081253   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 22:50:24.081298   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 22:50:24.115343   17406 cri.go:89] found id: "87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:24.115365   17406 cri.go:89] found id: ""
	I1001 22:50:24.115373   17406 logs.go:282] 1 containers: [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f]
	I1001 22:50:24.115415   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.118584   17406 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 22:50:24.118649   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 22:50:24.151635   17406 cri.go:89] found id: "e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:24.151658   17406 cri.go:89] found id: ""
	I1001 22:50:24.151666   17406 logs.go:282] 1 containers: [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f]
	I1001 22:50:24.151707   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.154924   17406 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 22:50:24.154990   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 22:50:24.187218   17406 cri.go:89] found id: "d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:24.187244   17406 cri.go:89] found id: ""
	I1001 22:50:24.187252   17406 logs.go:282] 1 containers: [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955]
	I1001 22:50:24.187293   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.190608   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 22:50:24.190666   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 22:50:24.222899   17406 cri.go:89] found id: "8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:24.222920   17406 cri.go:89] found id: ""
	I1001 22:50:24.222930   17406 logs.go:282] 1 containers: [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca]
	I1001 22:50:24.222983   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.226303   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 22:50:24.226358   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 22:50:24.259453   17406 cri.go:89] found id: "1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:24.259475   17406 cri.go:89] found id: ""
	I1001 22:50:24.259483   17406 logs.go:282] 1 containers: [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1]
	I1001 22:50:24.259573   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.262913   17406 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 22:50:24.262976   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 22:50:24.297869   17406 cri.go:89] found id: "7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:24.297896   17406 cri.go:89] found id: ""
	I1001 22:50:24.297904   17406 logs.go:282] 1 containers: [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101]
	I1001 22:50:24.297945   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.301077   17406 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 22:50:24.301142   17406 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 22:50:24.333856   17406 cri.go:89] found id: "6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:24.333880   17406 cri.go:89] found id: ""
	I1001 22:50:24.333887   17406 logs.go:282] 1 containers: [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5]
	I1001 22:50:24.333940   17406 ssh_runner.go:195] Run: which crictl
	I1001 22:50:24.337244   17406 logs.go:123] Gathering logs for kube-apiserver [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f] ...
	I1001 22:50:24.337267   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f"
	I1001 22:50:24.379044   17406 logs.go:123] Gathering logs for etcd [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f] ...
	I1001 22:50:24.379076   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f"
	I1001 22:50:24.425711   17406 logs.go:123] Gathering logs for coredns [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955] ...
	I1001 22:50:24.425743   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955"
	I1001 22:50:24.461124   17406 logs.go:123] Gathering logs for kube-proxy [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1] ...
	I1001 22:50:24.461153   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1"
	I1001 22:50:24.494069   17406 logs.go:123] Gathering logs for kindnet [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5] ...
	I1001 22:50:24.494106   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5"
	I1001 22:50:24.528045   17406 logs.go:123] Gathering logs for container status ...
	I1001 22:50:24.528072   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 22:50:24.568370   17406 logs.go:123] Gathering logs for kubelet ...
	I1001 22:50:24.568400   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 22:50:24.646437   17406 logs.go:123] Gathering logs for dmesg ...
	I1001 22:50:24.646466   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 22:50:24.658903   17406 logs.go:123] Gathering logs for describe nodes ...
	I1001 22:50:24.658929   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 22:50:24.754724   17406 logs.go:123] Gathering logs for kube-scheduler [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca] ...
	I1001 22:50:24.754763   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca"
	I1001 22:50:24.794371   17406 logs.go:123] Gathering logs for kube-controller-manager [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101] ...
	I1001 22:50:24.794403   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101"
	I1001 22:50:24.852996   17406 logs.go:123] Gathering logs for CRI-O ...
	I1001 22:50:24.853034   17406 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 22:50:27.432706   17406 system_pods.go:59] 18 kube-system pods found
	I1001 22:50:27.432747   17406 system_pods.go:61] "coredns-7c65d6cfc9-6cj4k" [bf0ce726-12dc-4a3b-bc7f-32b08162b072] Running
	I1001 22:50:27.432755   17406 system_pods.go:61] "csi-hostpath-attacher-0" [4d8a45e2-a65d-4d14-a0a5-61b0459194c8] Running
	I1001 22:50:27.432760   17406 system_pods.go:61] "csi-hostpath-resizer-0" [22ade8db-a4ea-45b8-99b2-fe431c97ecbb] Running
	I1001 22:50:27.432763   17406 system_pods.go:61] "csi-hostpathplugin-9hpwk" [9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c] Running
	I1001 22:50:27.432766   17406 system_pods.go:61] "etcd-addons-003557" [7b441ad9-1688-4020-b102-e367f37ff777] Running
	I1001 22:50:27.432770   17406 system_pods.go:61] "kindnet-8kp67" [c4f42c69-ca3e-4f26-acba-51f300a26d2e] Running
	I1001 22:50:27.432773   17406 system_pods.go:61] "kube-apiserver-addons-003557" [b68e3152-1b85-418a-acc5-62457bc07f17] Running
	I1001 22:50:27.432776   17406 system_pods.go:61] "kube-controller-manager-addons-003557" [fc7f4c52-6e3f-4f19-a2e0-965c373e53d9] Running
	I1001 22:50:27.432780   17406 system_pods.go:61] "kube-ingress-dns-minikube" [c5f5b225-07e7-4b1f-ad82-4e969170fdf5] Running
	I1001 22:50:27.432784   17406 system_pods.go:61] "kube-proxy-69j2j" [fb59b533-053f-480b-85dd-f485ad873034] Running
	I1001 22:50:27.432789   17406 system_pods.go:61] "kube-scheduler-addons-003557" [84ef7897-c5a4-4d4f-a0b7-3945ae61ea50] Running
	I1001 22:50:27.432792   17406 system_pods.go:61] "metrics-server-84c5f94fbc-zjg7c" [f8da0c14-1d24-402d-bcbd-d93fe9f23cc3] Running
	I1001 22:50:27.432796   17406 system_pods.go:61] "nvidia-device-plugin-daemonset-lqq7d" [96398da4-ed1b-465f-b551-4e9610a5a0b8] Running
	I1001 22:50:27.432799   17406 system_pods.go:61] "registry-66c9cd494c-nfhms" [6ea2ddd1-36cb-436c-8115-e19051d864b9] Running
	I1001 22:50:27.432802   17406 system_pods.go:61] "registry-proxy-b56zl" [927b1333-9d83-4da6-a33d-da374985f3f3] Running
	I1001 22:50:27.432806   17406 system_pods.go:61] "snapshot-controller-56fcc65765-r564p" [d04ff238-8840-4705-b74a-704495659229] Running
	I1001 22:50:27.432812   17406 system_pods.go:61] "snapshot-controller-56fcc65765-x5lsc" [f059396a-b1f2-4395-91fa-a812a3df93ca] Running
	I1001 22:50:27.432815   17406 system_pods.go:61] "storage-provisioner" [20cd141c-f893-40e3-ab7d-39590c85f67d] Running
	I1001 22:50:27.432820   17406 system_pods.go:74] duration metric: took 3.35158191s to wait for pod list to return data ...
	I1001 22:50:27.432830   17406 default_sa.go:34] waiting for default service account to be created ...
	I1001 22:50:27.435490   17406 default_sa.go:45] found service account: "default"
	I1001 22:50:27.435513   17406 default_sa.go:55] duration metric: took 2.67801ms for default service account to be created ...
	I1001 22:50:27.435526   17406 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 22:50:27.444043   17406 system_pods.go:86] 18 kube-system pods found
	I1001 22:50:27.444077   17406 system_pods.go:89] "coredns-7c65d6cfc9-6cj4k" [bf0ce726-12dc-4a3b-bc7f-32b08162b072] Running
	I1001 22:50:27.444083   17406 system_pods.go:89] "csi-hostpath-attacher-0" [4d8a45e2-a65d-4d14-a0a5-61b0459194c8] Running
	I1001 22:50:27.444087   17406 system_pods.go:89] "csi-hostpath-resizer-0" [22ade8db-a4ea-45b8-99b2-fe431c97ecbb] Running
	I1001 22:50:27.444093   17406 system_pods.go:89] "csi-hostpathplugin-9hpwk" [9a0b908b-663b-4aa8-a4ba-c064f7fb8e3c] Running
	I1001 22:50:27.444097   17406 system_pods.go:89] "etcd-addons-003557" [7b441ad9-1688-4020-b102-e367f37ff777] Running
	I1001 22:50:27.444100   17406 system_pods.go:89] "kindnet-8kp67" [c4f42c69-ca3e-4f26-acba-51f300a26d2e] Running
	I1001 22:50:27.444105   17406 system_pods.go:89] "kube-apiserver-addons-003557" [b68e3152-1b85-418a-acc5-62457bc07f17] Running
	I1001 22:50:27.444109   17406 system_pods.go:89] "kube-controller-manager-addons-003557" [fc7f4c52-6e3f-4f19-a2e0-965c373e53d9] Running
	I1001 22:50:27.444113   17406 system_pods.go:89] "kube-ingress-dns-minikube" [c5f5b225-07e7-4b1f-ad82-4e969170fdf5] Running
	I1001 22:50:27.444116   17406 system_pods.go:89] "kube-proxy-69j2j" [fb59b533-053f-480b-85dd-f485ad873034] Running
	I1001 22:50:27.444120   17406 system_pods.go:89] "kube-scheduler-addons-003557" [84ef7897-c5a4-4d4f-a0b7-3945ae61ea50] Running
	I1001 22:50:27.444124   17406 system_pods.go:89] "metrics-server-84c5f94fbc-zjg7c" [f8da0c14-1d24-402d-bcbd-d93fe9f23cc3] Running
	I1001 22:50:27.444127   17406 system_pods.go:89] "nvidia-device-plugin-daemonset-lqq7d" [96398da4-ed1b-465f-b551-4e9610a5a0b8] Running
	I1001 22:50:27.444131   17406 system_pods.go:89] "registry-66c9cd494c-nfhms" [6ea2ddd1-36cb-436c-8115-e19051d864b9] Running
	I1001 22:50:27.444135   17406 system_pods.go:89] "registry-proxy-b56zl" [927b1333-9d83-4da6-a33d-da374985f3f3] Running
	I1001 22:50:27.444138   17406 system_pods.go:89] "snapshot-controller-56fcc65765-r564p" [d04ff238-8840-4705-b74a-704495659229] Running
	I1001 22:50:27.444143   17406 system_pods.go:89] "snapshot-controller-56fcc65765-x5lsc" [f059396a-b1f2-4395-91fa-a812a3df93ca] Running
	I1001 22:50:27.444146   17406 system_pods.go:89] "storage-provisioner" [20cd141c-f893-40e3-ab7d-39590c85f67d] Running
	I1001 22:50:27.444152   17406 system_pods.go:126] duration metric: took 8.621679ms to wait for k8s-apps to be running ...
	I1001 22:50:27.444161   17406 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 22:50:27.444210   17406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 22:50:27.455667   17406 system_svc.go:56] duration metric: took 11.494257ms WaitForService to wait for kubelet
	I1001 22:50:27.455700   17406 kubeadm.go:582] duration metric: took 2m22.733365482s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 22:50:27.455724   17406 node_conditions.go:102] verifying NodePressure condition ...
	I1001 22:50:27.458744   17406 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1001 22:50:27.458792   17406 node_conditions.go:123] node cpu capacity is 8
	I1001 22:50:27.458812   17406 node_conditions.go:105] duration metric: took 3.081332ms to run NodePressure ...
	I1001 22:50:27.458828   17406 start.go:241] waiting for startup goroutines ...
	I1001 22:50:27.458837   17406 start.go:246] waiting for cluster config update ...
	I1001 22:50:27.458859   17406 start.go:255] writing updated cluster config ...
	I1001 22:50:27.459165   17406 ssh_runner.go:195] Run: rm -f paused
	I1001 22:50:27.508443   17406 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 22:50:27.510465   17406 out.go:177] * Done! kubectl is now configured to use "addons-003557" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 23:01:48 addons-003557 crio[1033]: time="2024-10-01 23:01:48.512136908Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-bmzvg Namespace:ingress-nginx ID:1a9ab1c4598221c5c746f92c653168ef651004beef9e865282df12fac4e4b9d0 UID:0af22e25-7ea9-45c5-b387-1eb67e1f1b99 NetNS:/var/run/netns/a920dc86-12cf-4670-9dda-9cf34bf2174b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 01 23:01:48 addons-003557 crio[1033]: time="2024-10-01 23:01:48.512250243Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-bmzvg from CNI network \"kindnet\" (type=ptp)"
	Oct 01 23:01:48 addons-003557 crio[1033]: time="2024-10-01 23:01:48.546179639Z" level=info msg="Stopped pod sandbox: 1a9ab1c4598221c5c746f92c653168ef651004beef9e865282df12fac4e4b9d0" id=5b8e8065-b033-45ad-8767-50b430ca8dbb name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 23:01:48 addons-003557 crio[1033]: time="2024-10-01 23:01:48.844112727Z" level=info msg="Removing container: 2948a71e700fb48c3ebb247a8adf50d480ce47e75df7707801e88064164584d6" id=22e1794c-77d9-4acc-b445-66b9af0e911e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 01 23:01:48 addons-003557 crio[1033]: time="2024-10-01 23:01:48.857107858Z" level=info msg="Removed container 2948a71e700fb48c3ebb247a8adf50d480ce47e75df7707801e88064164584d6: ingress-nginx/ingress-nginx-controller-bc57996ff-bmzvg/controller" id=22e1794c-77d9-4acc-b445-66b9af0e911e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.890611599Z" level=info msg="Removing container: 3b788fdf0d4ff58dc011c496bbd51c111c3df437addbfa5062fce16ee5722b12" id=2c0662b9-8a1b-4347-a8fe-455c7d798f93 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.904310839Z" level=info msg="Removed container 3b788fdf0d4ff58dc011c496bbd51c111c3df437addbfa5062fce16ee5722b12: ingress-nginx/ingress-nginx-admission-patch-9cw6h/patch" id=2c0662b9-8a1b-4347-a8fe-455c7d798f93 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.905618048Z" level=info msg="Removing container: 6430ee5d1d00b6e455e91b242765f870a7ddd83fd5ccaccd289942d880ca9e87" id=c1868f33-9c16-4c1b-86e7-5a3e7c564ae3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.918561405Z" level=info msg="Removed container 6430ee5d1d00b6e455e91b242765f870a7ddd83fd5ccaccd289942d880ca9e87: ingress-nginx/ingress-nginx-admission-create-xvl5n/create" id=c1868f33-9c16-4c1b-86e7-5a3e7c564ae3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.919811471Z" level=info msg="Stopping pod sandbox: 1a9ab1c4598221c5c746f92c653168ef651004beef9e865282df12fac4e4b9d0" id=62fd384f-edc6-4ae0-a53f-2ffcb11fead1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.919853741Z" level=info msg="Stopped pod sandbox (already stopped): 1a9ab1c4598221c5c746f92c653168ef651004beef9e865282df12fac4e4b9d0" id=62fd384f-edc6-4ae0-a53f-2ffcb11fead1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.920107417Z" level=info msg="Removing pod sandbox: 1a9ab1c4598221c5c746f92c653168ef651004beef9e865282df12fac4e4b9d0" id=81eeed64-1610-44b9-afb3-d6422093042a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.925368301Z" level=info msg="Removed pod sandbox: 1a9ab1c4598221c5c746f92c653168ef651004beef9e865282df12fac4e4b9d0" id=81eeed64-1610-44b9-afb3-d6422093042a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.925700391Z" level=info msg="Stopping pod sandbox: df31a94b355c15abf38032232329c038687623cd9edafa684b543a2b9aa8255a" id=ed04a855-8de6-49be-9623-f425952d1e79 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.925738401Z" level=info msg="Stopped pod sandbox (already stopped): df31a94b355c15abf38032232329c038687623cd9edafa684b543a2b9aa8255a" id=ed04a855-8de6-49be-9623-f425952d1e79 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.926043910Z" level=info msg="Removing pod sandbox: df31a94b355c15abf38032232329c038687623cd9edafa684b543a2b9aa8255a" id=d69a4064-1b56-422e-953c-d3d5d58fb27c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.931876923Z" level=info msg="Removed pod sandbox: df31a94b355c15abf38032232329c038687623cd9edafa684b543a2b9aa8255a" id=d69a4064-1b56-422e-953c-d3d5d58fb27c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.932277285Z" level=info msg="Stopping pod sandbox: e51071ef17b5a3b06c48f1e8d8e5edba16f29302b2be0421d4f82ceb8c3da3e4" id=3a6ce513-f0b8-45a6-b47d-aeb6990fd13d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.932315198Z" level=info msg="Stopped pod sandbox (already stopped): e51071ef17b5a3b06c48f1e8d8e5edba16f29302b2be0421d4f82ceb8c3da3e4" id=3a6ce513-f0b8-45a6-b47d-aeb6990fd13d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.932559579Z" level=info msg="Removing pod sandbox: e51071ef17b5a3b06c48f1e8d8e5edba16f29302b2be0421d4f82ceb8c3da3e4" id=61e339ac-bb8e-437d-bb10-73b1194e21c3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.937997547Z" level=info msg="Removed pod sandbox: e51071ef17b5a3b06c48f1e8d8e5edba16f29302b2be0421d4f82ceb8c3da3e4" id=61e339ac-bb8e-437d-bb10-73b1194e21c3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.938399839Z" level=info msg="Stopping pod sandbox: 727bf92a44730344e3e94b4258b2c4e8797fe67e3c8a59865c480f5db5a1c477" id=d1bc77c4-3fd4-4239-9976-50af3183fcb0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.938440579Z" level=info msg="Stopped pod sandbox (already stopped): 727bf92a44730344e3e94b4258b2c4e8797fe67e3c8a59865c480f5db5a1c477" id=d1bc77c4-3fd4-4239-9976-50af3183fcb0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.938718242Z" level=info msg="Removing pod sandbox: 727bf92a44730344e3e94b4258b2c4e8797fe67e3c8a59865c480f5db5a1c477" id=58eec71f-3c21-4036-9003-097a40fa2e69 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 23:01:58 addons-003557 crio[1033]: time="2024-10-01 23:01:58.944958681Z" level=info msg="Removed pod sandbox: 727bf92a44730344e3e94b4258b2c4e8797fe67e3c8a59865c480f5db5a1c477" id=58eec71f-3c21-4036-9003-097a40fa2e69 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4dc42d3fdc56b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   0202013e72854       hello-world-app-55bf9c44b4-m668q
	498059f035429       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   ae208a4146187       busybox
	2bd3c57ae8b07       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago       Running             nginx                     0                   73e2c96114943       nginx
	1f273bc376231       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   89e385fcf8343       metrics-server-84c5f94fbc-zjg7c
	f56a3f3f14126       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   2970730c13a9c       storage-provisioner
	d0bb0bd096899       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   71e46a2f64bf1       coredns-7c65d6cfc9-6cj4k
	6c8c4e3c950ae       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                        16 minutes ago      Running             kindnet-cni               0                   c9b5341189508       kindnet-8kp67
	1dd0b703f1047       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        16 minutes ago      Running             kube-proxy                0                   a3b24715e9571       kube-proxy-69j2j
	87752c9368125       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        16 minutes ago      Running             kube-apiserver            0                   e698fbb913e3d       kube-apiserver-addons-003557
	e2475f6c19b3e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        16 minutes ago      Running             etcd                      0                   d39ade4a2c9ba       etcd-addons-003557
	8cfd176ea2dd2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        16 minutes ago      Running             kube-scheduler            0                   721511daf2b97       kube-scheduler-addons-003557
	7c7968828a881       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        16 minutes ago      Running             kube-controller-manager   0                   4cc08c42bd513       kube-controller-manager-addons-003557
	
	
	==> coredns [d0bb0bd09689912eaec4ea0833ebd794e0ebca64f2c5b2ffba449ba5af187955] <==
	[INFO] 10.244.0.19:49497 - 25815 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007684599s
	[INFO] 10.244.0.19:35833 - 49258 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006297617s
	[INFO] 10.244.0.19:57840 - 7152 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006345597s
	[INFO] 10.244.0.19:53289 - 34990 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006555821s
	[INFO] 10.244.0.19:40075 - 53808 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006401573s
	[INFO] 10.244.0.19:56035 - 3265 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006449625s
	[INFO] 10.244.0.19:47848 - 61108 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006533239s
	[INFO] 10.244.0.19:49497 - 27593 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005898062s
	[INFO] 10.244.0.19:48293 - 31045 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006609262s
	[INFO] 10.244.0.19:48293 - 55290 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006798221s
	[INFO] 10.244.0.19:35833 - 43096 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007401997s
	[INFO] 10.244.0.19:49497 - 13016 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007109456s
	[INFO] 10.244.0.19:47848 - 51206 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00722662s
	[INFO] 10.244.0.19:53289 - 60239 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007385727s
	[INFO] 10.244.0.19:49497 - 59337 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052994s
	[INFO] 10.244.0.19:40075 - 19197 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007478783s
	[INFO] 10.244.0.19:48293 - 33863 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000315765s
	[INFO] 10.244.0.19:56035 - 13739 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.0075664s
	[INFO] 10.244.0.19:35833 - 51642 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000237842s
	[INFO] 10.244.0.19:47848 - 10553 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000215993s
	[INFO] 10.244.0.19:57840 - 377 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007792561s
	[INFO] 10.244.0.19:53289 - 4341 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000195378s
	[INFO] 10.244.0.19:56035 - 54899 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072263s
	[INFO] 10.244.0.19:57840 - 34812 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051234s
	[INFO] 10.244.0.19:40075 - 13669 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000136116s
	
	
	==> describe nodes <==
	Name:               addons-003557
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-003557
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=addons-003557
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T22_47_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-003557
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 22:47:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-003557
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:02:06 +0000   Tue, 01 Oct 2024 22:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:02:06 +0000   Tue, 01 Oct 2024 22:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:02:06 +0000   Tue, 01 Oct 2024 22:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:02:06 +0000   Tue, 01 Oct 2024 22:48:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-003557
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 9ed29642d6d04c75b61a758c77e1d89f
	  System UUID:                f2297c5c-adbc-484a-bd16-a1531a553d6e
	  Boot ID:                    47cfe39a-81d3-44ee-8311-5ab31cab672f
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-m668q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 coredns-7c65d6cfc9-6cj4k                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-addons-003557                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-8kp67                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-003557             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-003557    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-69j2j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-003557             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-84c5f94fbc-zjg7c          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         16m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-003557 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-003557 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node addons-003557 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node addons-003557 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node addons-003557 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                kubelet          Node addons-003557 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node addons-003557 event: Registered Node addons-003557 in Controller
	  Normal   NodeReady                15m                kubelet          Node addons-003557 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000749] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000737] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000717] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000641] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000640] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000640] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000724] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.651075] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.054572] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.030598] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.215174] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 1 22:59] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[  +1.003835] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[  +2.015821] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[  +4.255579] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[  +8.191302] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[Oct 1 23:00] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	[ +32.513132] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 87 fc 78 93 fb de 22 25 73 a7 cf 08 00
	
	
	==> etcd [e2475f6c19b3e4c282ddc06ac24e2e3bcd191ae2ab8bb235463eb8eb5dc7966f] <==
	{"level":"warn","ts":"2024-10-01T22:48:08.042208Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.2984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-addons-003557\" ","response":"range_response_count:1 size:7632"}
	{"level":"info","ts":"2024-10-01T22:48:08.042287Z","caller":"traceutil/trace.go:171","msg":"trace[1949431822] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-addons-003557; range_end:; response_count:1; response_revision:422; }","duration":"105.377422ms","start":"2024-10-01T22:48:07.936901Z","end":"2024-10-01T22:48:08.042278Z","steps":["trace[1949431822] 'agreement among raft nodes before linearized reading'  (duration: 105.274394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.042440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.041417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:48:08.042493Z","caller":"traceutil/trace.go:171","msg":"trace[619821307] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:422; }","duration":"309.096108ms","start":"2024-10-01T22:48:07.733390Z","end":"2024-10-01T22:48:08.042486Z","steps":["trace[619821307] 'agreement among raft nodes before linearized reading'  (duration: 309.027271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.042544Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:48:07.733351Z","time spent":"309.18501ms","remote":"127.0.0.1:50540","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/kube-system/registry\" "}
	{"level":"warn","ts":"2024-10-01T22:48:08.042690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"404.064854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-vt4wh\" ","response":"range_response_count:1 size:3993"}
	{"level":"info","ts":"2024-10-01T22:48:08.042743Z","caller":"traceutil/trace.go:171","msg":"trace[1229854174] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-vt4wh; range_end:; response_count:1; response_revision:422; }","duration":"404.117853ms","start":"2024-10-01T22:48:07.638618Z","end":"2024-10-01T22:48:08.042736Z","steps":["trace[1229854174] 'agreement among raft nodes before linearized reading'  (duration: 404.04266ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.042790Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:48:07.638603Z","time spent":"404.1809ms","remote":"127.0.0.1:50280","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4017,"request content":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-vt4wh\" "}
	{"level":"warn","ts":"2024-10-01T22:48:08.042967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"404.38146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3144"}
	{"level":"info","ts":"2024-10-01T22:48:08.043022Z","caller":"traceutil/trace.go:171","msg":"trace[1034155169] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:422; }","duration":"404.435476ms","start":"2024-10-01T22:48:07.638579Z","end":"2024-10-01T22:48:08.043014Z","steps":["trace[1034155169] 'agreement among raft nodes before linearized reading'  (duration: 404.359529ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.043079Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:48:07.638539Z","time spent":"404.533248ms","remote":"127.0.0.1:50540","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":1,"response size":3168,"request content":"key:\"/registry/deployments/default/cloud-spanner-emulator\" "}
	{"level":"warn","ts":"2024-10-01T22:48:08.043214Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"407.535296ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:48:08.043279Z","caller":"traceutil/trace.go:171","msg":"trace[690083644] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:422; }","duration":"407.600156ms","start":"2024-10-01T22:48:07.635671Z","end":"2024-10-01T22:48:08.043271Z","steps":["trace[690083644] 'agreement among raft nodes before linearized reading'  (duration: 407.523911ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:48:08.145870Z","caller":"traceutil/trace.go:171","msg":"trace[1573253521] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"101.911502ms","start":"2024-10-01T22:48:08.043940Z","end":"2024-10-01T22:48:08.145851Z","steps":["trace[1573253521] 'process raft request'  (duration: 96.492529ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:48:08.146592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.506136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2024-10-01T22:48:08.148168Z","caller":"traceutil/trace.go:171","msg":"trace[313351885] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:427; }","duration":"104.085949ms","start":"2024-10-01T22:48:08.044066Z","end":"2024-10-01T22:48:08.148152Z","steps":["trace[313351885] 'agreement among raft nodes before linearized reading'  (duration: 102.489858ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:49:38.865298Z","caller":"traceutil/trace.go:171","msg":"trace[1592076025] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"110.479447ms","start":"2024-10-01T22:49:38.754796Z","end":"2024-10-01T22:49:38.865275Z","steps":["trace[1592076025] 'process raft request'  (duration: 110.298004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:49:50.165142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.700411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-01T22:49:50.165212Z","caller":"traceutil/trace.go:171","msg":"trace[1736816869] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1219; }","duration":"108.782713ms","start":"2024-10-01T22:49:50.056414Z","end":"2024-10-01T22:49:50.165196Z","steps":["trace[1736816869] 'range keys from in-memory index tree'  (duration: 108.568939ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:57:55.062972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1534}
	{"level":"info","ts":"2024-10-01T22:57:55.084720Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1534,"took":"21.290473ms","hash":3361936497,"current-db-size-bytes":6246400,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3198976,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-10-01T22:57:55.084768Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3361936497,"revision":1534,"compact-revision":-1}
	{"level":"info","ts":"2024-10-01T23:02:55.068131Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1952}
	{"level":"info","ts":"2024-10-01T23:02:55.084734Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1952,"took":"16.136088ms","hash":104986429,"current-db-size-bytes":6246400,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4943872,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-10-01T23:02:55.084783Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":104986429,"revision":1952,"compact-revision":1534}
	
	
	==> kernel <==
	 23:04:09 up 46 min,  0 users,  load average: 0.38, 0.26, 0.28
	Linux addons-003557 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6c8c4e3c950ae991bcf5ad4d06e9a3b14620e65789db6ba27c33b401bd831cf5] <==
	I1001 23:02:06.240111       1 main.go:299] handling current node
	I1001 23:02:16.244708       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:02:16.244742       1 main.go:299] handling current node
	I1001 23:02:26.239330       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:02:26.239373       1 main.go:299] handling current node
	I1001 23:02:36.239878       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:02:36.239925       1 main.go:299] handling current node
	I1001 23:02:46.243808       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:02:46.243851       1 main.go:299] handling current node
	I1001 23:02:56.240728       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:02:56.240773       1 main.go:299] handling current node
	I1001 23:03:06.239575       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:03:06.239604       1 main.go:299] handling current node
	I1001 23:03:16.240722       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:03:16.240756       1 main.go:299] handling current node
	I1001 23:03:26.239372       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:03:26.239420       1 main.go:299] handling current node
	I1001 23:03:36.247995       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:03:36.248038       1 main.go:299] handling current node
	I1001 23:03:46.246350       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:03:46.246382       1 main.go:299] handling current node
	I1001 23:03:56.240784       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:03:56.240837       1 main.go:299] handling current node
	I1001 23:04:06.239152       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:04:06.239197       1 main.go:299] handling current node
	
	
	==> kube-apiserver [87752c9368125c717fc263f1221f034a5560fcceaeeb7ff353f390779805b03f] <==
	I1001 22:50:17.280293       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1001 22:58:38.852087       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1001 22:58:39.553141       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.10.162"}
	I1001 22:59:09.222562       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1001 22:59:12.619387       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1001 22:59:12.625270       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1001 22:59:12.631150       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1001 22:59:17.269202       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1001 22:59:18.290368       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1001 22:59:22.735439       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1001 22:59:22.942054       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.45.72"}
	E1001 22:59:27.631536       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1001 22:59:28.301512       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.301560       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:59:28.314793       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.314834       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:59:28.315644       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.355369       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.355420       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:59:28.441304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:59:28.441343       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1001 22:59:29.332922       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1001 22:59:29.441654       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1001 22:59:29.549574       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1001 23:01:41.154041       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.51.148"}
	
	
	==> kube-controller-manager [7c7968828a881c96006260108deb7202011614b58887139d73fafd5649cc0101] <==
	E1001 23:01:53.721635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1001 23:01:55.451151       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W1001 23:01:57.966877       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:57.966932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1001 23:02:06.521671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-003557"
	W1001 23:02:17.175193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:02:17.175235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:02:25.505365       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:02:25.505404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:02:30.164845       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:02:30.164887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:02:49.545945       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:02:49.545989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:03:16.693448       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:03:16.693500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:03:19.238105       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:03:19.238144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:03:23.941719       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:03:23.941757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:03:35.628673       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:03:35.628713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:03:48.314377       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:03:48.314418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:04:06.209271       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:04:06.209320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [1dd0b703f1047d60ae94e00da67cbb3f6e2c52d5094ae6dfa83f94e7e6d437c1] <==
	I1001 22:48:05.557957       1 server_linux.go:66] "Using iptables proxy"
	I1001 22:48:06.935938       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1001 22:48:06.936105       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 22:48:08.335516       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1001 22:48:08.335603       1 server_linux.go:169] "Using iptables Proxier"
	I1001 22:48:08.439322       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 22:48:08.442907       1 server.go:483] "Version info" version="v1.31.1"
	I1001 22:48:08.442943       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 22:48:08.444401       1 config.go:199] "Starting service config controller"
	I1001 22:48:08.445143       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 22:48:08.444927       1 config.go:328] "Starting node config controller"
	I1001 22:48:08.445294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 22:48:08.444426       1 config.go:105] "Starting endpoint slice config controller"
	I1001 22:48:08.445345       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 22:48:08.546666       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 22:48:08.549131       1 shared_informer.go:320] Caches are synced for service config
	I1001 22:48:08.549158       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8cfd176ea2dd210be7be65683196956232de3de278cdc386080914da1db689ca] <==
	E1001 22:47:56.636501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.636228       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 22:47:56.636533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.636224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 22:47:56.636538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1001 22:47:56.636567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.636282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 22:47:56.636953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.637015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 22:47:56.637054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:56.637119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 22:47:56.637156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.472310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 22:47:57.472345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.503661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 22:47:57.503697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.515984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 22:47:57.516025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.584924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 22:47:57.584969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.636284       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 22:47:57.636323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:47:57.643587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 22:47:57.643635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 22:47:57.959820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 23:02:18 addons-003557 kubelet[1628]: E1001 23:02:18.912164    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823738911848304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:28 addons-003557 kubelet[1628]: E1001 23:02:28.914297    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823748914052867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:28 addons-003557 kubelet[1628]: E1001 23:02:28.914351    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823748914052867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:38 addons-003557 kubelet[1628]: E1001 23:02:38.917334    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823758917064306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:38 addons-003557 kubelet[1628]: E1001 23:02:38.917375    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823758917064306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:48 addons-003557 kubelet[1628]: E1001 23:02:48.919702    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823768919464891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:48 addons-003557 kubelet[1628]: E1001 23:02:48.919734    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823768919464891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:58 addons-003557 kubelet[1628]: E1001 23:02:58.668064    1628 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff, memory: /docker/e707a4e961c156489a8c539eea2f10763ebfd9f41c5a68d4ec2601690ecf5fff/system.slice/kubelet.service"
	Oct 01 23:02:58 addons-003557 kubelet[1628]: E1001 23:02:58.922868    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823778922588910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:58 addons-003557 kubelet[1628]: E1001 23:02:58.922917    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823778922588910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:08 addons-003557 kubelet[1628]: E1001 23:03:08.925541    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823788925270731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:08 addons-003557 kubelet[1628]: E1001 23:03:08.925573    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823788925270731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:18 addons-003557 kubelet[1628]: E1001 23:03:18.927427    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823798927180033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:18 addons-003557 kubelet[1628]: E1001 23:03:18.927468    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823798927180033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:28 addons-003557 kubelet[1628]: E1001 23:03:28.929923    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823808929704438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:28 addons-003557 kubelet[1628]: E1001 23:03:28.929974    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823808929704438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:38 addons-003557 kubelet[1628]: I1001 23:03:38.641421    1628 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:03:38 addons-003557 kubelet[1628]: E1001 23:03:38.932072    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823818931863529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:38 addons-003557 kubelet[1628]: E1001 23:03:38.932104    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823818931863529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:48 addons-003557 kubelet[1628]: E1001 23:03:48.934315    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823828934082651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:48 addons-003557 kubelet[1628]: E1001 23:03:48.934348    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823828934082651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:58 addons-003557 kubelet[1628]: E1001 23:03:58.936984    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823838936748320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:58 addons-003557 kubelet[1628]: E1001 23:03:58.937022    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823838936748320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:04:08 addons-003557 kubelet[1628]: E1001 23:04:08.940769    1628 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823848940421763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:04:08 addons-003557 kubelet[1628]: E1001 23:04:08.940812    1628 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823848940421763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593338,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f56a3f3f14126cacab4cbd02cc73c99c7bcb1128d9431c0d38e1a34f2c686815] <==
	I1001 22:48:47.673165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 22:48:47.681511       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 22:48:47.681566       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 22:48:47.688349       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 22:48:47.688487       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-003557_0a79a27c-37bb-4dc1-86a9-f726cc75b962!
	I1001 22:48:47.688428       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bee9eb1d-e089-411a-bb70-9303bb4633b2", APIVersion:"v1", ResourceVersion:"912", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-003557_0a79a27c-37bb-4dc1-86a9-f726cc75b962 became leader
	I1001 22:48:47.789023       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-003557_0a79a27c-37bb-4dc1-86a9-f726cc75b962!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-003557 -n addons-003557
helpers_test.go:261: (dbg) Run:  kubectl --context addons-003557 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (331.44s)

Test pass (300/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 8.06
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.06
21 TestBinaryMirror 0.73
22 TestOffline 87.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 185.59
31 TestAddons/serial/GCPAuth/Namespaces 0.12
33 TestAddons/parallel/Registry 14.31
35 TestAddons/parallel/InspektorGadget 11.67
38 TestAddons/parallel/CSI 56.41
39 TestAddons/parallel/Headlamp 17.47
40 TestAddons/parallel/CloudSpanner 5.51
41 TestAddons/parallel/LocalPath 52.95
42 TestAddons/parallel/NvidiaDevicePlugin 5.53
43 TestAddons/parallel/Yakd 12.14
44 TestAddons/StoppedEnableDisable 12.02
45 TestCertOptions 28.66
46 TestCertExpiration 220.26
48 TestForceSystemdFlag 29.27
49 TestForceSystemdEnv 32.79
51 TestKVMDriverInstallOrUpdate 3.39
55 TestErrorSpam/setup 20.06
56 TestErrorSpam/start 0.53
57 TestErrorSpam/status 0.85
58 TestErrorSpam/pause 1.5
59 TestErrorSpam/unpause 1.65
60 TestErrorSpam/stop 1.35
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 40.2
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 28.09
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.16
72 TestFunctional/serial/CacheCmd/cache/add_local 1.36
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 38.56
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.35
83 TestFunctional/serial/LogsFileCmd 1.39
84 TestFunctional/serial/InvalidService 4.52
86 TestFunctional/parallel/ConfigCmd 0.32
87 TestFunctional/parallel/DashboardCmd 11.18
88 TestFunctional/parallel/DryRun 0.34
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 0.91
94 TestFunctional/parallel/ServiceCmdConnect 7.67
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 30.39
98 TestFunctional/parallel/SSHCmd 0.51
99 TestFunctional/parallel/CpCmd 1.99
100 TestFunctional/parallel/MySQL 20.77
101 TestFunctional/parallel/FileSync 0.39
102 TestFunctional/parallel/CertSync 1.63
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
110 TestFunctional/parallel/License 0.18
111 TestFunctional/parallel/ServiceCmd/DeployApp 9.18
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.3
117 TestFunctional/parallel/Version/short 0.05
118 TestFunctional/parallel/Version/components 0.74
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.5
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.24
124 TestFunctional/parallel/ImageCommands/Setup 1.02
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.24
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.67
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.77
130 TestFunctional/parallel/ServiceCmd/List 0.51
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.98
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
134 TestFunctional/parallel/ServiceCmd/Format 0.36
135 TestFunctional/parallel/ServiceCmd/URL 0.33
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
138 TestFunctional/parallel/ProfileCmd/profile_list 0.42
139 TestFunctional/parallel/MountCmd/any-port 6.59
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
150 TestFunctional/parallel/MountCmd/specific-port 1.76
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 151.7
159 TestMultiControlPlane/serial/DeployApp 4.1
160 TestMultiControlPlane/serial/PingHostFromPods 1
161 TestMultiControlPlane/serial/AddWorkerNode 30.15
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
164 TestMultiControlPlane/serial/CopyFile 15.53
165 TestMultiControlPlane/serial/StopSecondaryNode 12.46
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
167 TestMultiControlPlane/serial/RestartSecondaryNode 30.46
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 176.03
170 TestMultiControlPlane/serial/DeleteSecondaryNode 12.08
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
172 TestMultiControlPlane/serial/StopCluster 35.48
173 TestMultiControlPlane/serial/RestartCluster 68.24
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
175 TestMultiControlPlane/serial/AddSecondaryNode 66.38
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
180 TestJSONOutput/start/Command 71.02
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.68
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.58
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.77
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
205 TestKicCustomNetwork/create_custom_network 26.84
206 TestKicCustomNetwork/use_default_bridge_network 26.34
207 TestKicExistingNetwork 23.44
208 TestKicCustomSubnet 23.3
209 TestKicStaticIP 26.24
210 TestMainNoArgs 0.04
211 TestMinikubeProfile 51.41
214 TestMountStart/serial/StartWithMountFirst 8.19
215 TestMountStart/serial/VerifyMountFirst 0.24
216 TestMountStart/serial/StartWithMountSecond 8.21
217 TestMountStart/serial/VerifyMountSecond 0.24
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.17
221 TestMountStart/serial/RestartStopped 7.22
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 65.23
226 TestMultiNode/serial/DeployApp2Nodes 16.92
227 TestMultiNode/serial/PingHostFrom2Pods 0.72
228 TestMultiNode/serial/AddNode 24.51
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.61
231 TestMultiNode/serial/CopyFile 8.93
232 TestMultiNode/serial/StopNode 2.1
233 TestMultiNode/serial/StartAfterStop 9.01
234 TestMultiNode/serial/RestartKeepsNodes 108.5
235 TestMultiNode/serial/DeleteNode 5.24
236 TestMultiNode/serial/StopMultiNode 23.73
237 TestMultiNode/serial/RestartMultiNode 60.2
238 TestMultiNode/serial/ValidateNameConflict 25.7
243 TestPreload 104.9
245 TestScheduledStopUnix 99.27
248 TestInsufficientStorage 9.79
249 TestRunningBinaryUpgrade 135.59
251 TestKubernetesUpgrade 348.06
252 TestMissingContainerUpgrade 132.11
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
255 TestNoKubernetes/serial/StartWithK8s 33.2
256 TestNoKubernetes/serial/StartWithStopK8s 12.16
257 TestNoKubernetes/serial/Start 7.4
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
259 TestNoKubernetes/serial/ProfileList 7.07
260 TestNoKubernetes/serial/Stop 1.2
261 TestNoKubernetes/serial/StartNoArgs 6.53
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
263 TestStoppedBinaryUpgrade/Setup 0.42
264 TestStoppedBinaryUpgrade/Upgrade 79.93
272 TestNetworkPlugins/group/false 3.33
276 TestStoppedBinaryUpgrade/MinikubeLogs 2.97
285 TestPause/serial/Start 42.69
286 TestNetworkPlugins/group/auto/Start 39.38
287 TestPause/serial/SecondStartNoReconfiguration 29.28
288 TestNetworkPlugins/group/auto/KubeletFlags 0.25
289 TestNetworkPlugins/group/auto/NetCatPod 10.2
290 TestNetworkPlugins/group/auto/DNS 0.14
291 TestNetworkPlugins/group/auto/Localhost 0.12
292 TestNetworkPlugins/group/auto/HairPin 0.12
293 TestPause/serial/Pause 0.73
294 TestPause/serial/VerifyStatus 0.31
295 TestPause/serial/Unpause 0.68
296 TestPause/serial/PauseAgain 0.71
297 TestPause/serial/DeletePaused 2.8
298 TestNetworkPlugins/group/kindnet/Start 42.21
299 TestPause/serial/VerifyDeletedResources 0.58
300 TestNetworkPlugins/group/calico/Start 56.38
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
303 TestNetworkPlugins/group/kindnet/NetCatPod 10.17
304 TestNetworkPlugins/group/kindnet/DNS 0.13
305 TestNetworkPlugins/group/calico/ControllerPod 6.01
306 TestNetworkPlugins/group/kindnet/Localhost 0.1
307 TestNetworkPlugins/group/kindnet/HairPin 0.1
308 TestNetworkPlugins/group/calico/KubeletFlags 0.26
309 TestNetworkPlugins/group/calico/NetCatPod 10.22
310 TestNetworkPlugins/group/calico/DNS 0.13
311 TestNetworkPlugins/group/calico/Localhost 0.12
312 TestNetworkPlugins/group/calico/HairPin 0.12
313 TestNetworkPlugins/group/custom-flannel/Start 49.62
314 TestNetworkPlugins/group/enable-default-cni/Start 65.05
315 TestNetworkPlugins/group/flannel/Start 47.69
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.18
318 TestNetworkPlugins/group/custom-flannel/DNS 0.13
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
321 TestNetworkPlugins/group/bridge/Start 67.47
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
324 TestNetworkPlugins/group/flannel/NetCatPod 11.2
326 TestStartStop/group/old-k8s-version/serial/FirstStart 129.76
327 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
328 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.22
329 TestNetworkPlugins/group/flannel/DNS 0.16
330 TestNetworkPlugins/group/flannel/Localhost 0.14
331 TestNetworkPlugins/group/flannel/HairPin 0.13
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
336 TestStartStop/group/no-preload/serial/FirstStart 55.51
338 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.49
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
340 TestNetworkPlugins/group/bridge/NetCatPod 10.25
341 TestNetworkPlugins/group/bridge/DNS 0.13
342 TestNetworkPlugins/group/bridge/Localhost 0.1
343 TestNetworkPlugins/group/bridge/HairPin 0.1
344 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
347 TestStartStop/group/newest-cni/serial/FirstStart 28.12
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.92
349 TestStartStop/group/no-preload/serial/DeployApp 9.23
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
351 TestStartStop/group/no-preload/serial/Stop 11.98
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 277.36
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
355 TestStartStop/group/no-preload/serial/SecondStart 263.3
356 TestStartStop/group/newest-cni/serial/DeployApp 0
357 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
358 TestStartStop/group/newest-cni/serial/Stop 1.24
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
360 TestStartStop/group/newest-cni/serial/SecondStart 14.73
361 TestStartStop/group/old-k8s-version/serial/DeployApp 8.5
362 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
365 TestStartStop/group/newest-cni/serial/Pause 2.98
367 TestStartStop/group/embed-certs/serial/FirstStart 39.69
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
369 TestStartStop/group/old-k8s-version/serial/Stop 12.04
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
371 TestStartStop/group/old-k8s-version/serial/SecondStart 143.07
372 TestStartStop/group/embed-certs/serial/DeployApp 8.24
373 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
374 TestStartStop/group/embed-certs/serial/Stop 12.43
375 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
376 TestStartStop/group/embed-certs/serial/SecondStart 262.54
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
380 TestStartStop/group/old-k8s-version/serial/Pause 2.57
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
386 TestStartStop/group/no-preload/serial/Pause 2.57
387 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
388 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.71
389 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
392 TestStartStop/group/embed-certs/serial/Pause 2.6
TestDownloadOnly/v1.20.0/json-events (5.2s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-195979 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-195979 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.201328964s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.20s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1001 22:47:10.980928   16095 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1001 22:47:10.981014   16095 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-195979
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-195979: exit status 85 (56.233091ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-195979 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |          |
	|         | -p download-only-195979        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 22:47:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 22:47:05.817468   16107 out.go:345] Setting OutFile to fd 1 ...
	I1001 22:47:05.817706   16107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:05.817714   16107 out.go:358] Setting ErrFile to fd 2...
	I1001 22:47:05.817718   16107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:05.817888   16107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	W1001 22:47:05.818011   16107 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19740-9314/.minikube/config/config.json: open /home/jenkins/minikube-integration/19740-9314/.minikube/config/config.json: no such file or directory
	I1001 22:47:05.818567   16107 out.go:352] Setting JSON to true
	I1001 22:47:05.819440   16107 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1773,"bootTime":1727821053,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 22:47:05.819545   16107 start.go:139] virtualization: kvm guest
	I1001 22:47:05.822091   16107 out.go:97] [download-only-195979] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1001 22:47:05.822208   16107 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 22:47:05.822253   16107 notify.go:220] Checking for updates...
	I1001 22:47:05.823583   16107 out.go:169] MINIKUBE_LOCATION=19740
	I1001 22:47:05.824922   16107 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 22:47:05.826194   16107 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	I1001 22:47:05.827399   16107 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	I1001 22:47:05.828678   16107 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1001 22:47:05.831048   16107 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 22:47:05.831281   16107 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 22:47:05.853853   16107 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 22:47:05.853956   16107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 22:47:06.221621   16107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-10-01 22:47:06.212188098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 22:47:06.221732   16107 docker.go:318] overlay module found
	I1001 22:47:06.223407   16107 out.go:97] Using the docker driver based on user configuration
	I1001 22:47:06.223434   16107 start.go:297] selected driver: docker
	I1001 22:47:06.223443   16107 start.go:901] validating driver "docker" against <nil>
	I1001 22:47:06.223557   16107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 22:47:06.270967   16107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-10-01 22:47:06.261983943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 22:47:06.271136   16107 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 22:47:06.271661   16107 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1001 22:47:06.271815   16107 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 22:47:06.273869   16107 out.go:169] Using Docker driver with root privileges
	I1001 22:47:06.275364   16107 cni.go:84] Creating CNI manager for ""
	I1001 22:47:06.275427   16107 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 22:47:06.275439   16107 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 22:47:06.275541   16107 start.go:340] cluster config:
	{Name:download-only-195979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-195979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:06.276937   16107 out.go:97] Starting "download-only-195979" primary control-plane node in "download-only-195979" cluster
	I1001 22:47:06.276963   16107 cache.go:121] Beginning downloading kic base image for docker with crio
	I1001 22:47:06.278158   16107 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1001 22:47:06.278183   16107 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 22:47:06.278294   16107 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 22:47:06.295061   16107 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 22:47:06.295224   16107 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 22:47:06.295326   16107 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 22:47:06.301732   16107 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:06.301760   16107 cache.go:56] Caching tarball of preloaded images
	I1001 22:47:06.301887   16107 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 22:47:06.303668   16107 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 22:47:06.303683   16107 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:06.333488   16107 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:09.390806   16107 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:09.390907   16107 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:09.593151   16107 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 22:47:10.336213   16107 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1001 22:47:10.336539   16107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/download-only-195979/config.json ...
	I1001 22:47:10.336568   16107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/download-only-195979/config.json: {Name:mk8982911bb617b77c8ee036e7b16eccfe24b0b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:10.336772   16107 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 22:47:10.336972   16107 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19740-9314/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-195979 host does not exist
	  To start a cluster, run: "minikube start -p download-only-195979"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-195979
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (8.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-179949 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-179949 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.063931047s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1001 22:47:19.430508   16095 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1001 22:47:19.430550   16095 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-179949
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-179949: exit status 85 (61.256234ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-195979 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | -p download-only-195979        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| delete  | -p download-only-195979        | download-only-195979 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| start   | -o=json --download-only        | download-only-179949 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | -p download-only-179949        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 22:47:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 22:47:11.404471   16460 out.go:345] Setting OutFile to fd 1 ...
	I1001 22:47:11.404597   16460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:11.404607   16460 out.go:358] Setting ErrFile to fd 2...
	I1001 22:47:11.404612   16460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:11.404816   16460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 22:47:11.405444   16460 out.go:352] Setting JSON to true
	I1001 22:47:11.406324   16460 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1778,"bootTime":1727821053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 22:47:11.406421   16460 start.go:139] virtualization: kvm guest
	I1001 22:47:11.408925   16460 out.go:97] [download-only-179949] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 22:47:11.409090   16460 notify.go:220] Checking for updates...
	I1001 22:47:11.410502   16460 out.go:169] MINIKUBE_LOCATION=19740
	I1001 22:47:11.411968   16460 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 22:47:11.413338   16460 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	I1001 22:47:11.414790   16460 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	I1001 22:47:11.416263   16460 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1001 22:47:11.418776   16460 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 22:47:11.419005   16460 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 22:47:11.440031   16460 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 22:47:11.440107   16460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 22:47:11.485695   16460 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-01 22:47:11.476700967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 22:47:11.485791   16460 docker.go:318] overlay module found
	I1001 22:47:11.487614   16460 out.go:97] Using the docker driver based on user configuration
	I1001 22:47:11.487642   16460 start.go:297] selected driver: docker
	I1001 22:47:11.487648   16460 start.go:901] validating driver "docker" against <nil>
	I1001 22:47:11.487731   16460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 22:47:11.537017   16460 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-01 22:47:11.526814697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 22:47:11.537157   16460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 22:47:11.537646   16460 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1001 22:47:11.537789   16460 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 22:47:11.539500   16460 out.go:169] Using Docker driver with root privileges
	I1001 22:47:11.540615   16460 cni.go:84] Creating CNI manager for ""
	I1001 22:47:11.540690   16460 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 22:47:11.540706   16460 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 22:47:11.540791   16460 start.go:340] cluster config:
	{Name:download-only-179949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-179949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:11.542109   16460 out.go:97] Starting "download-only-179949" primary control-plane node in "download-only-179949" cluster
	I1001 22:47:11.542135   16460 cache.go:121] Beginning downloading kic base image for docker with crio
	I1001 22:47:11.543617   16460 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1001 22:47:11.543645   16460 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:11.543762   16460 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 22:47:11.559123   16460 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 22:47:11.559231   16460 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 22:47:11.559247   16460 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1001 22:47:11.559251   16460 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1001 22:47:11.559261   16460 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 22:47:11.572363   16460 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:11.572386   16460 cache.go:56] Caching tarball of preloaded images
	I1001 22:47:11.572532   16460 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:11.574281   16460 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1001 22:47:11.574301   16460 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:11.600386   16460 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19740-9314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-179949 host does not exist
	  To start a cluster, run: "minikube start -p download-only-179949"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-179949
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.06s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-848534 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-848534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-848534
--- PASS: TestDownloadOnlyKic (1.06s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
I1001 22:47:21.143941   16095 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-560533 --alsologtostderr --binary-mirror http://127.0.0.1:36859 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-560533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-560533
--- PASS: TestBinaryMirror (0.73s)

TestOffline (87.56s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-319732 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-319732 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m23.678350413s)
helpers_test.go:175: Cleaning up "offline-crio-319732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-319732
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-319732: (3.878110298s)
--- PASS: TestOffline (87.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:932: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-003557
addons_test.go:932: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-003557: exit status 85 (48.000017ms)

-- stdout --
	* Profile "addons-003557" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-003557"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:943: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-003557
addons_test.go:943: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-003557: exit status 85 (52.73867ms)

-- stdout --
	* Profile "addons-003557" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-003557"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (185.59s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-003557 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-003557 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m5.59433939s)
--- PASS: TestAddons/Setup (185.59s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-003557 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-003557 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Registry (14.31s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.395396ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-nfhms" [6ea2ddd1-36cb-436c-8115-e19051d864b9] Running
I1001 22:58:38.873175   16095 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1001 22:58:38.873198   16095 kapi.go:107] duration metric: took 24.037495ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003821734s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b56zl" [927b1333-9d83-4da6-a33d-da374985f3f3] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002860442s
addons_test.go:331: (dbg) Run:  kubectl --context addons-003557 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-003557 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-003557 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.555469465s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 ip
2024/10/01 22:58:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.31s)

TestAddons/parallel/InspektorGadget (11.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5485g" [c8f508a6-9084-4140-a5f3-5bb3eef05f6c] Running
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004426213s
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-003557 addons disable inspektor-gadget --alsologtostderr -v=1: (5.661160425s)
--- PASS: TestAddons/parallel/InspektorGadget (11.67s)

TestAddons/parallel/CSI (56.41s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1001 22:58:38.849181   16095 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 24.046775ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-003557 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-003557 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [71063454-5030-4a9e-ac16-bcc08824b957] Pending
helpers_test.go:344: "task-pv-pod" [71063454-5030-4a9e-ac16-bcc08824b957] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [71063454-5030-4a9e-ac16-bcc08824b957] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004106766s
addons_test.go:511: (dbg) Run:  kubectl --context addons-003557 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-003557 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-003557 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-003557 delete pod task-pv-pod: (1.198649227s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-003557 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-003557 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-003557 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [92efad93-7972-47ea-b7b0-be4f6240c386] Pending
helpers_test.go:344: "task-pv-pod-restore" [92efad93-7972-47ea-b7b0-be4f6240c386] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [92efad93-7972-47ea-b7b0-be4f6240c386] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00334981s
addons_test.go:553: (dbg) Run:  kubectl --context addons-003557 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-003557 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-003557 delete volumesnapshot new-snapshot-demo
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-003557 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.536374486s)
--- PASS: TestAddons/parallel/CSI (56.41s)

TestAddons/parallel/Headlamp (17.47s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:741: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-003557 --alsologtostderr -v=1
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-fh766" [dc014f72-81b3-4d8b-a885-45ff65433c26] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-fh766" [dc014f72-81b3-4d8b-a885-45ff65433c26] Running
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003096329s
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable headlamp --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-003557 addons disable headlamp --alsologtostderr -v=1: (5.717098608s)
--- PASS: TestAddons/parallel/Headlamp (17.47s)

TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-5r47l" [6eb93b5e-0363-4564-8aa4-573e82fcbf50] Running
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004191752s
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

TestAddons/parallel/LocalPath (52.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:881: (dbg) Run:  kubectl --context addons-003557 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:887: (dbg) Run:  kubectl --context addons-003557 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:891: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-003557 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [434b7fd1-6445-464a-b8b5-876aaec6423d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [434b7fd1-6445-464a-b8b5-876aaec6423d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [434b7fd1-6445-464a-b8b5-876aaec6423d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003178074s
addons_test.go:899: (dbg) Run:  kubectl --context addons-003557 get pvc test-pvc -o=json
addons_test.go:908: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 ssh "cat /opt/local-path-provisioner/pvc-0e7d7921-5349-40a6-8079-5946d984cc77_default_test-pvc/file1"
addons_test.go:920: (dbg) Run:  kubectl --context addons-003557 delete pod test-local-path
addons_test.go:924: (dbg) Run:  kubectl --context addons-003557 delete pvc test-pvc
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-003557 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.091548854s)
--- PASS: TestAddons/parallel/LocalPath (52.95s)

TestAddons/parallel/NvidiaDevicePlugin (5.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lqq7d" [96398da4-ed1b-465f-b551-4e9610a5a0b8] Running
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00367501s
addons_test.go:959: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-003557
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

TestAddons/parallel/Yakd (12.14s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-jplfg" [04e29ef7-bc6e-4b9c-a5a8-51203e9398d7] Running
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003533647s
addons_test.go:971: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable yakd --alsologtostderr -v=1
addons_test.go:971: (dbg) Done: out/minikube-linux-amd64 -p addons-003557 addons disable yakd --alsologtostderr -v=1: (6.131571019s)
--- PASS: TestAddons/parallel/Yakd (12.14s)

TestAddons/StoppedEnableDisable (12.02s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-003557
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-003557: (11.786817s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-003557
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-003557
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-003557
--- PASS: TestAddons/StoppedEnableDisable (12.02s)

TestCertOptions (28.66s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-752061 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-752061 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.984298374s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-752061 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-752061 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-752061 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-752061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-752061
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-752061: (1.917829837s)
--- PASS: TestCertOptions (28.66s)

TestCertExpiration (220.26s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-669285 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-669285 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.647820226s)
E1001 23:35:28.086905   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-669285 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-669285 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.93869392s)
helpers_test.go:175: Cleaning up "cert-expiration-669285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-669285
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-669285: (2.668167886s)
--- PASS: TestCertExpiration (220.26s)

TestForceSystemdFlag (29.27s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-425845 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-425845 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.66620357s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-425845 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-425845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-425845
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-425845: (2.323958629s)
--- PASS: TestForceSystemdFlag (29.27s)

TestForceSystemdEnv (32.79s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-622517 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-622517 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.306685798s)
helpers_test.go:175: Cleaning up "force-systemd-env-622517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-622517
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-622517: (2.478878869s)
--- PASS: TestForceSystemdEnv (32.79s)

TestKVMDriverInstallOrUpdate (3.39s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1001 23:34:50.302762   16095 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1001 23:34:50.302918   16095 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1001 23:34:50.338956   16095 install.go:62] docker-machine-driver-kvm2: exit status 1
W1001 23:34:50.339287   16095 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1001 23:34:50.339345   16095 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2104545152/001/docker-machine-driver-kvm2
I1001 23:34:50.611021   16095 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2104545152/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc000014c10 gz:0xc000014c18 tar:0xc0000143f0 tar.bz2:0xc0000148c0 tar.gz:0xc000014990 tar.xz:0xc000014bc0 tar.zst:0xc000014c00 tbz2:0xc0000148c0 tgz:0xc000014990 txz:0xc000014bc0 tzst:0xc000014c00 xz:0xc000014c20 zip:0xc000014c40 zst:0xc000014c28] Getters:map[file:0xc0013fa390 http:0xc0005d94f0 https:0xc0005d9540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1001 23:34:50.611401   16095 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2104545152/001/docker-machine-driver-kvm2
I1001 23:34:52.184764   16095 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1001 23:34:52.184847   16095 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1001 23:34:52.217140   16095 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1001 23:34:52.217182   16095 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1001 23:34:52.217272   16095 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1001 23:34:52.217308   16095 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2104545152/002/docker-machine-driver-kvm2
I1001 23:34:52.380410   16095 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2104545152/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc000014c10 gz:0xc000014c18 tar:0xc0000143f0 tar.bz2:0xc0000148c0 tar.gz:0xc000014990 tar.xz:0xc000014bc0 tar.zst:0xc000014c00 tbz2:0xc0000148c0 tgz:0xc000014990 txz:0xc000014bc0 tzst:0xc000014c00 xz:0xc000014c20 zip:0xc000014c40 zst:0xc000014c28] Getters:map[file:0xc00066e610 http:0xc000629d60 https:0xc000629db0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1001 23:34:52.380458   16095 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2104545152/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.39s)

TestErrorSpam/setup (20.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-205775 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-205775 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-205775 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-205775 --driver=docker  --container-runtime=crio: (20.055617116s)
--- PASS: TestErrorSpam/setup (20.06s)

TestErrorSpam/start (0.53s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 start --dry-run
--- PASS: TestErrorSpam/start (0.53s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.5s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 pause
--- PASS: TestErrorSpam/pause (1.50s)

TestErrorSpam/unpause (1.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (1.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 stop: (1.181744444s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-205775 --log_dir /tmp/nospam-205775 stop
--- PASS: TestErrorSpam/stop (1.35s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19740-9314/.minikube/files/etc/test/nested/copy/16095/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.2s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-077195 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1001 23:05:28.087447   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:28.093830   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:28.105198   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:28.126624   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:28.168734   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:28.250123   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:28.411671   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:28.733335   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:29.375426   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:30.656829   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:33.219755   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-077195 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.200244622s)
--- PASS: TestFunctional/serial/StartWithProxy (40.20s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.09s)

=== RUN   TestFunctional/serial/SoftStart
I1001 23:05:38.306685   16095 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-077195 --alsologtostderr -v=8
E1001 23:05:38.342126   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:05:48.584210   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-077195 --alsologtostderr -v=8: (28.083896922s)
functional_test.go:663: soft start took 28.084785528s for "functional-077195" cluster.
I1001 23:06:06.391058   16095 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (28.09s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-077195 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-077195 cache add registry.k8s.io/pause:3.3: (1.175012586s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 cache add registry.k8s.io/pause:latest
E1001 23:06:09.065906   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-077195 cache add registry.k8s.io/pause:latest: (1.047036304s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-077195 /tmp/TestFunctionalserialCacheCmdcacheadd_local2093288622/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 cache add minikube-local-cache-test:functional-077195
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-077195 cache add minikube-local-cache-test:functional-077195: (1.018882143s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 cache delete minikube-local-cache-test:functional-077195
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-077195
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (266.247885ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 kubectl -- --context functional-077195 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-077195 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.56s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-077195 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1001 23:06:50.027701   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-077195 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.556911015s)
functional_test.go:761: restart took 38.557056242s for "functional-077195" cluster.
I1001 23:06:51.925021   16095 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.56s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-077195 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.35s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-077195 logs: (1.351895423s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

TestFunctional/serial/LogsFileCmd (1.39s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 logs --file /tmp/TestFunctionalserialLogsFileCmd2827083139/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-077195 logs --file /tmp/TestFunctionalserialLogsFileCmd2827083139/001/logs.txt: (1.392927436s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

TestFunctional/serial/InvalidService (4.52s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-077195 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-077195
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-077195: exit status 115 (322.670658ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32073 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-077195 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-077195 delete -f testdata/invalidsvc.yaml: (1.013959019s)
--- PASS: TestFunctional/serial/InvalidService (4.52s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 config get cpus: exit status 14 (62.084585ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 config get cpus: exit status 14 (48.340064ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DashboardCmd (11.18s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-077195 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-077195 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 65563: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.18s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-077195 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-077195 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (154.351323ms)

-- stdout --
	* [functional-077195] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1001 23:07:13.936813   65029 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:07:13.936925   65029 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:07:13.936930   65029 out.go:358] Setting ErrFile to fd 2...
	I1001 23:07:13.936934   65029 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:07:13.937111   65029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 23:07:13.937675   65029 out.go:352] Setting JSON to false
	I1001 23:07:13.938689   65029 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2981,"bootTime":1727821053,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:07:13.938796   65029 start.go:139] virtualization: kvm guest
	I1001 23:07:13.941288   65029 out.go:177] * [functional-077195] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:07:13.942971   65029 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:07:13.942963   65029 notify.go:220] Checking for updates...
	I1001 23:07:13.946066   65029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:07:13.947891   65029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	I1001 23:07:13.949669   65029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	I1001 23:07:13.951163   65029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:07:13.952701   65029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:07:13.954753   65029 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:07:13.955202   65029 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:07:13.982198   65029 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:07:13.982320   65029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:07:14.033679   65029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-10-01 23:07:14.023027656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 23:07:14.033817   65029 docker.go:318] overlay module found
	I1001 23:07:14.037145   65029 out.go:177] * Using the docker driver based on existing profile
	I1001 23:07:14.038892   65029 start.go:297] selected driver: docker
	I1001 23:07:14.038919   65029 start.go:901] validating driver "docker" against &{Name:functional-077195 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-077195 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:07:14.039053   65029 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:07:14.041914   65029 out.go:201] 
	W1001 23:07:14.043531   65029 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 23:07:14.045178   65029 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-077195 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-077195 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-077195 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (148.384574ms)

-- stdout --
	* [functional-077195] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1001 23:07:14.280317   65241 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:07:14.280457   65241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:07:14.280467   65241 out.go:358] Setting ErrFile to fd 2...
	I1001 23:07:14.280471   65241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:07:14.280799   65241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 23:07:14.281499   65241 out.go:352] Setting JSON to false
	I1001 23:07:14.282497   65241 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2981,"bootTime":1727821053,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:07:14.282621   65241 start.go:139] virtualization: kvm guest
	I1001 23:07:14.284674   65241 out.go:177] * [functional-077195] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1001 23:07:14.286770   65241 notify.go:220] Checking for updates...
	I1001 23:07:14.286792   65241 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:07:14.288315   65241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:07:14.290793   65241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	I1001 23:07:14.292589   65241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	I1001 23:07:14.293968   65241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:07:14.295355   65241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:07:14.297052   65241 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:07:14.297743   65241 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:07:14.322121   65241 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:07:14.322261   65241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:07:14.369407   65241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-10-01 23:07:14.359321142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 23:07:14.369537   65241 docker.go:318] overlay module found
	I1001 23:07:14.372731   65241 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1001 23:07:14.374306   65241 start.go:297] selected driver: docker
	I1001 23:07:14.374333   65241 start.go:901] validating driver "docker" against &{Name:functional-077195 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-077195 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:07:14.374453   65241 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:07:14.377464   65241 out.go:201] 
	W1001 23:07:14.379357   65241 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1001 23:07:14.381341   65241 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

TestFunctional/parallel/ServiceCmdConnect (7.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-077195 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-077195 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5xfb4" [3c36f35c-997c-4d1f-b4f7-84a98c607ba3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5xfb4" [3c36f35c-997c-4d1f-b4f7-84a98c607ba3] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005590134s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31712
functional_test.go:1675: http://192.168.49.2:31712: success! body:

Hostname: hello-node-connect-67bdd5bbb4-5xfb4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31712
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.67s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (30.39s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d63a039b-c2a4-435c-92bc-d6f7c228c548] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00355679s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-077195 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-077195 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-077195 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-077195 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [114e98f2-b2f2-4584-840d-d56dc8ecc8bc] Pending
helpers_test.go:344: "sp-pod" [114e98f2-b2f2-4584-840d-d56dc8ecc8bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [114e98f2-b2f2-4584-840d-d56dc8ecc8bc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003429737s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-077195 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-077195 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-077195 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e3f1d137-a849-4923-8cd0-0f6a8a9c0257] Pending
helpers_test.go:344: "sp-pod" [e3f1d137-a849-4923-8cd0-0f6a8a9c0257] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e3f1d137-a849-4923-8cd0-0f6a8a9c0257] Running
2024/10/01 23:07:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003709581s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-077195 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.39s)

TestFunctional/parallel/SSHCmd (0.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

TestFunctional/parallel/CpCmd (1.99s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh -n functional-077195 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 cp functional-077195:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2879045029/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh -n functional-077195 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh -n functional-077195 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)

TestFunctional/parallel/MySQL (20.77s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-077195 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-cjkcf" [f3f40c98-3c66-4792-a9f0-e6b597ffeea9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-cjkcf" [f3f40c98-3c66-4792-a9f0-e6b597ffeea9] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.003522444s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-077195 exec mysql-6cdb49bbb-cjkcf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-077195 exec mysql-6cdb49bbb-cjkcf -- mysql -ppassword -e "show databases;": exit status 1 (97.746707ms)
** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1001 23:07:40.344966   16095 retry.go:31] will retry after 1.378417791s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-077195 exec mysql-6cdb49bbb-cjkcf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.77s)
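The ERROR 2002 above is benign startup noise: the pod reports Running before mysqld has finished creating its socket, so the harness retries (the log's `retry.go:31` line: "will retry after 1.378417791s") and the second exec passes. A rough shell sketch of that retry-until-ready pattern follows; it is illustrative only (minikube's actual helper is the Go code in retry.go), and `flaky_mysql` is a made-up stand-in for the real mysql client call:

```shell
# Retry a command up to $max times with a short delay between attempts.
retry() {
  max=$1
  shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    echo "attempt $n failed; retrying" >&2
    n=$((n + 1))
    sleep 1
  done
}

# Stand-in for the mysql client: fails once (socket not ready), then succeeds.
ATTEMPTS=0
flaky_mysql() {
  ATTEMPTS=$((ATTEMPTS + 1))
  [ "$ATTEMPTS" -ge 2 ]
}

retry 5 flaky_mysql && echo "show databases succeeded on attempt $ATTEMPTS"
```

This mirrors why the overall test still passes: the first failed exec is absorbed by the retry loop rather than failing the run.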

TestFunctional/parallel/FileSync (0.39s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16095/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo cat /etc/test/nested/copy/16095/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (1.63s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16095.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo cat /etc/ssl/certs/16095.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16095.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo cat /usr/share/ca-certificates/16095.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/160952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo cat /etc/ssl/certs/160952.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/160952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo cat /usr/share/ca-certificates/160952.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.63s)
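The `.0` filenames checked above (`51391683.0`, `3ec20f2e.0`) follow OpenSSL's hashed-directory convention: the name is the certificate's subject hash plus a collision-counter suffix, which is how tools locate the synced `16095.pem`/`160952.pem` certs in the system trust store. A hedged sketch of deriving such a name (assumes `openssl` is on PATH; the certificate and CN here are throwaways, not minikube's test certs):

```shell
# Create a throwaway self-signed certificate, then compute the
# c_rehash-style name it would get under /etc/ssl/certs/<hash>.0.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=certsync-demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
hash=$(openssl x509 -noout -subject_hash -in /tmp/demo.pem)
echo "would be linked as /etc/ssl/certs/${hash}.0"
```

The hash is eight hex digits, matching the shape of the paths the test probes.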

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-077195 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 ssh "sudo systemctl is-active docker": exit status 1 (270.581261ms)
-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 ssh "sudo systemctl is-active containerd": exit status 1 (258.787186ms)
-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
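The two non-zero exits above are the expected outcome, not failures: `systemctl is-active` exits with status 3 when a unit is not running (surfacing here as "ssh: Process exited with status 3"), and the test treats a non-zero exit paired with `inactive` on stdout as proof the other runtimes are disabled. A small sketch of that check; `fake_is_active` is a hypothetical stand-in for the real `minikube ssh "sudo systemctl is-active docker"` call:

```shell
# Pass only when the command prints "inactive" AND exits non-zero,
# i.e. the unit exists but is not running.
check_inactive() {
  out=$("$@")
  status=$?
  [ "$out" = "inactive" ] && [ "$status" -ne 0 ]
}

# Stand-in emulating systemctl's documented behavior for a stopped unit.
fake_is_active() { echo "inactive"; return 3; }

if check_inactive fake_is_active; then
  echo "runtime correctly disabled"
fi
```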

TestFunctional/parallel/License (0.18s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-077195 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-077195 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-w29t5" [79f01597-2bf0-4d13-85e4-5000d14e8291] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-w29t5" [79f01597-2bf0-4d13-85e4-5000d14e8291] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.008214406s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-077195 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-077195 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-077195 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-077195 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 60722: os: process already finished
helpers_test.go:508: unable to kill pid 60423: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-077195 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.3s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-077195 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f69a28ff-769f-4618-9112-322cf6c4ce68] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f69a28ff-769f-4618-9112-322cf6c4ce68] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004409136s
I1001 23:07:12.068044   16095 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.30s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.74s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-077195 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-077195
localhost/kicbase/echo-server:functional-077195
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-077195 image ls --format short --alsologtostderr:
I1001 23:07:23.048445   67651 out.go:345] Setting OutFile to fd 1 ...
I1001 23:07:23.048788   67651 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:23.048802   67651 out.go:358] Setting ErrFile to fd 2...
I1001 23:07:23.048808   67651 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:23.049027   67651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
I1001 23:07:23.049659   67651 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:23.049756   67651 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:23.050139   67651 cli_runner.go:164] Run: docker container inspect functional-077195 --format={{.State.Status}}
I1001 23:07:23.069931   67651 ssh_runner.go:195] Run: systemctl --version
I1001 23:07:23.069995   67651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-077195
I1001 23:07:23.087891   67651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/functional-077195/id_rsa Username:docker}
I1001 23:07:23.181109   67651 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-077195 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | c7b4f26a7d93f | 44.6MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/my-image                      | functional-077195  | 45ff9725c964a | 1.47MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-077195  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-077195  | 62ef6052c4130 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| docker.io/library/nginx                 | latest             | 9527c0f683c3b | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-077195 image ls --format table --alsologtostderr:
I1001 23:07:26.105846   68288 out.go:345] Setting OutFile to fd 1 ...
I1001 23:07:26.105949   68288 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:26.105954   68288 out.go:358] Setting ErrFile to fd 2...
I1001 23:07:26.105957   68288 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:26.106135   68288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
I1001 23:07:26.106720   68288 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:26.106818   68288 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:26.107170   68288 cli_runner.go:164] Run: docker container inspect functional-077195 --format={{.State.Status}}
I1001 23:07:26.125110   68288 ssh_runner.go:195] Run: systemctl --version
I1001 23:07:26.125164   68288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-077195
I1001 23:07:26.146298   68288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/functional-077195/id_rsa Username:docker}
I1001 23:07:26.338895   68288 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-077195 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":["docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44647101"},{"id":"45ff9725c964a4edbdf3d78c4fe8a75f4ab4bdeb2
a289b3a15467fab90028df7","repoDigests":["localhost/my-image@sha256:1a91e122d897bed2e84263d3e8a8e491ee5a706f02543b4ba485f2a8d4a973db"],"repoTags":["localhost/my-image:functional-077195"],"size":"1468193"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDig
ests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"62ef6052c4130da727eb2ba40c02375966fcfcc869460b4afdce2ecdc373c889","repoDigests":["localhost/mini
kube-local-cache-test@sha256:5b2010e20a04d1934351ea8488ae8a8d2d3fbcd6949424f1a3b37a7ade9df538"],"repoTags":["localhost/minikube-local-cache-test:functional-077195"],"size":"3330"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256
:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd","repoDigests":["docker.io/library/nginx@sha256:10b61fc3d8262c8bf44c89aef3d81202ce12b8cba12fff2e32ca5978a2d88c2b","docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853881"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-min
ikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-077195"],"size":"4943877"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k
8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"1b73b0f13064db539b0b674e3ddd57da8c45d944754286fb3c7daa787169cf5b","repoDigests":["docker.io/library/84e9daa4c4ab2970081c5b6e2cbeee187b3802fc60698c3faa9e09f4fddd1190-tmp@sha256:8097cc994575c4f57ff76da3256441818a497d7eac37548f8fb027e7bca406c8"],"repoTags":[],"size":"1465612"},{"id":"
56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-077195 image ls --format json --alsologtostderr:
I1001 23:07:25.604793   68229 out.go:345] Setting OutFile to fd 1 ...
I1001 23:07:25.604924   68229 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:25.604935   68229 out.go:358] Setting ErrFile to fd 2...
I1001 23:07:25.604941   68229 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:25.605176   68229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
I1001 23:07:25.605793   68229 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:25.605914   68229 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:25.606317   68229 cli_runner.go:164] Run: docker container inspect functional-077195 --format={{.State.Status}}
I1001 23:07:25.624116   68229 ssh_runner.go:195] Run: systemctl --version
I1001 23:07:25.624170   68229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-077195
I1001 23:07:25.650669   68229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/functional-077195/id_rsa Username:docker}
I1001 23:07:25.884941   68229 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.50s)
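The JSON listing above is the raw output of `sudo crictl images --output json` relayed by `image ls --format json`: an array of entries with `id`, `repoDigests`, `repoTags` (possibly empty), and `size` in bytes as a string. A minimal sketch of how entries of that shape can be summarized (hypothetical helper, not the test's own code; sample entries copied from the listing above):

```python
import json

# Two entries copied from the listing above; note the second image is
# untagged ("repoTags": []), which is why the report shows repoTags: [].
images_json = """
[
  {"id": "9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30",
   "repoDigests": ["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],
   "repoTags": ["localhost/kicbase/echo-server:functional-077195"],
   "size": "4943877"},
  {"id": "115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7",
   "repoDigests": ["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a"],
   "repoTags": [],
   "size": "43824855"}
]
"""

def summarize(raw: str) -> list[tuple[str, int]]:
    """Return (tag-or-short-id, size-in-bytes) pairs, untagged images last."""
    rows = []
    for img in json.loads(raw):
        # Fall back to a docker-style <none> label plus a 12-char id prefix
        # when an image has no repoTags.
        name = img["repoTags"][0] if img["repoTags"] else "<none> (" + img["id"][:12] + ")"
        rows.append((name, int(img["size"])))
    return sorted(rows, key=lambda r: r[0].startswith("<none>"))

for name, size in summarize(images_json):
    print(f"{name}\t{size}")
```
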

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-077195 image ls --format yaml --alsologtostderr:
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd
repoDigests:
- docker.io/library/nginx@sha256:10b61fc3d8262c8bf44c89aef3d81202ce12b8cba12fff2e32ca5978a2d88c2b
- docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
repoTags:
- docker.io/library/nginx:latest
size: "191853881"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 62ef6052c4130da727eb2ba40c02375966fcfcc869460b4afdce2ecdc373c889
repoDigests:
- localhost/minikube-local-cache-test@sha256:5b2010e20a04d1934351ea8488ae8a8d2d3fbcd6949424f1a3b37a7ade9df538
repoTags:
- localhost/minikube-local-cache-test:functional-077195
size: "3330"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests:
- docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "44647101"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-077195
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-077195 image ls --format yaml --alsologtostderr:
I1001 23:07:23.265456   67765 out.go:345] Setting OutFile to fd 1 ...
I1001 23:07:23.265578   67765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:23.265588   67765 out.go:358] Setting ErrFile to fd 2...
I1001 23:07:23.265592   67765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:23.265815   67765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
I1001 23:07:23.266548   67765 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:23.266706   67765 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:23.267119   67765 cli_runner.go:164] Run: docker container inspect functional-077195 --format={{.State.Status}}
I1001 23:07:23.284284   67765 ssh_runner.go:195] Run: systemctl --version
I1001 23:07:23.284336   67765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-077195
I1001 23:07:23.302817   67765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/functional-077195/id_rsa Username:docker}
I1001 23:07:23.396897   67765 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 ssh pgrep buildkitd: exit status 1 (251.136157ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image build -t localhost/my-image:functional-077195 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-077195 image build -t localhost/my-image:functional-077195 testdata/build --alsologtostderr: (2.476731097s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-077195 image build -t localhost/my-image:functional-077195 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1b73b0f1306
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-077195
--> 45ff9725c96
Successfully tagged localhost/my-image:functional-077195
45ff9725c964a4edbdf3d78c4fe8a75f4ab4bdeb2a289b3a15467fab90028df7
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-077195 image build -t localhost/my-image:functional-077195 testdata/build --alsologtostderr:
I1001 23:07:23.730896   67980 out.go:345] Setting OutFile to fd 1 ...
I1001 23:07:23.731065   67980 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:23.731078   67980 out.go:358] Setting ErrFile to fd 2...
I1001 23:07:23.731085   67980 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:07:23.731293   67980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
I1001 23:07:23.731943   67980 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:23.732555   67980 config.go:182] Loaded profile config "functional-077195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:07:23.733013   67980 cli_runner.go:164] Run: docker container inspect functional-077195 --format={{.State.Status}}
I1001 23:07:23.752179   67980 ssh_runner.go:195] Run: systemctl --version
I1001 23:07:23.752264   67980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-077195
I1001 23:07:23.770149   67980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/functional-077195/id_rsa Username:docker}
I1001 23:07:23.860952   67980 build_images.go:161] Building image from path: /tmp/build.2837222152.tar
I1001 23:07:23.861019   67980 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1001 23:07:23.869340   67980 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2837222152.tar
I1001 23:07:23.872443   67980 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2837222152.tar: stat -c "%s %y" /var/lib/minikube/build/build.2837222152.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2837222152.tar': No such file or directory
I1001 23:07:23.872473   67980 ssh_runner.go:362] scp /tmp/build.2837222152.tar --> /var/lib/minikube/build/build.2837222152.tar (3072 bytes)
I1001 23:07:23.894802   67980 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2837222152
I1001 23:07:23.902917   67980 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2837222152 -xf /var/lib/minikube/build/build.2837222152.tar
I1001 23:07:23.911423   67980 crio.go:315] Building image: /var/lib/minikube/build/build.2837222152
I1001 23:07:23.911482   67980 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-077195 /var/lib/minikube/build/build.2837222152 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1001 23:07:26.066583   67980 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-077195 /var/lib/minikube/build/build.2837222152 --cgroup-manager=cgroupfs: (2.155059306s)
I1001 23:07:26.066637   67980 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2837222152
I1001 23:07:26.146427   67980 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2837222152.tar
I1001 23:07:26.157332   67980 build_images.go:217] Built localhost/my-image:functional-077195 from /tmp/build.2837222152.tar
I1001 23:07:26.157367   67980 build_images.go:133] succeeded building to: functional-077195
I1001 23:07:26.157373   67980 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)
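The build transcript above runs three steps inside the guest via `sudo podman build`. From the STEP 1/3..3/3 lines, the `testdata/build` context presumably contains a Containerfile equivalent to the following (a reconstruction from the transcript, not the repository's actual file; the real context also supplies `content.txt`):

```
# Hypothetical reconstruction of testdata/build's Containerfile,
# inferred from the STEP lines in the build output above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```

Each step that produces a layer is followed by a short image id in the output (`--> 1b73b0f1306`), which is why `1b73b0f13064...` also appears earlier in the image listings as an untagged intermediate.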

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-077195
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image load --daemon kicbase/echo-server:functional-077195 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-077195 image ls: (1.049252231s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image load --daemon kicbase/echo-server:functional-077195 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-077195
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image load --daemon kicbase/echo-server:functional-077195 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image save kicbase/echo-server:functional-077195 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image rm kicbase/echo-server:functional-077195 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-077195 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.746408752s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.98s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 service list -o json
functional_test.go:1494: Took "602.007562ms" to run "out/minikube-linux-amd64 -p functional-077195 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30206
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30206
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-077195
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 image save --daemon kicbase/echo-server:functional-077195 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-077195
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "366.506819ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.12624ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.59s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdany-port2853666167/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727824032065084910" to /tmp/TestFunctionalparallelMountCmdany-port2853666167/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727824032065084910" to /tmp/TestFunctionalparallelMountCmdany-port2853666167/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727824032065084910" to /tmp/TestFunctionalparallelMountCmdany-port2853666167/001/test-1727824032065084910
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.785167ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 23:07:12.351257   16095 retry.go:31] will retry after 366.750602ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  1 23:07 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  1 23:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  1 23:07 test-1727824032065084910
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh cat /mount-9p/test-1727824032065084910
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-077195 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ceda9d72-4d7a-4289-bad8-0991ca5b3a3b] Pending
helpers_test.go:344: "busybox-mount" [ceda9d72-4d7a-4289-bad8-0991ca5b3a3b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ceda9d72-4d7a-4289-bad8-0991ca5b3a3b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ceda9d72-4d7a-4289-bad8-0991ca5b3a3b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004014514s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-077195 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdany-port2853666167/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.59s)
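In the transcript above, the first `findmnt -T /mount-9p` probe exits non-zero (the 9p mount is not up yet) and the harness logs `retry.go:31] will retry after 366.750602ms` before the second probe succeeds. A minimal sketch of that retry-until-success pattern with jittered, growing delays (hypothetical helper in Python, not minikube's actual retry.go):

```python
import random
import time

def retry_until(probe, attempts=5, base_delay=0.25):
    """Call probe() until it returns True, sleeping a jittered,
    growing delay between failed attempts; give up after `attempts`."""
    for attempt in range(attempts):
        if probe():
            return True
        # Double the delay each round and add jitter so parallel
        # callers do not probe in lockstep.
        delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
        time.sleep(delay)
    return False

# Simulated probe: fails once, then succeeds -- mirroring the
# "Non-zero exit ... will retry after ..." then success sequence above.
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] >= 2

print(retry_until(probe, base_delay=0.01))  # True after one retry
```
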

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-077195 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.137.83 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-077195 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "324.903832ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.83246ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/MountCmd/specific-port (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdspecific-port2093971371/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.851542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1001 23:07:18.926054   16095 retry.go:31] will retry after 458.854282ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdspecific-port2093971371/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 ssh "sudo umount -f /mount-9p": exit status 1 (276.672668ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-077195 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdspecific-port2093971371/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2056376268/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2056376268/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2056376268/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T" /mount1: exit status 1 (383.428962ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1001 23:07:20.798770   16095 retry.go:31] will retry after 501.382176ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-077195 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-077195 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2056376268/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2056376268/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-077195 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2056376268/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-077195
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-077195
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-077195
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (151.7s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-724461 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1001 23:08:11.949800   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-724461 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m31.020855696s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (151.70s)

TestMultiControlPlane/serial/DeployApp (4.1s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-724461 -- rollout status deployment/busybox: (2.292775176s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-49s6j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-qqkfd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-strms -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-49s6j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-qqkfd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-strms -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-49s6j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-qqkfd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-strms -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.10s)

TestMultiControlPlane/serial/PingHostFromPods (1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-49s6j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-49s6j -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-qqkfd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-qqkfd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-strms -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-724461 -- exec busybox-7dff88458-strms -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)

TestMultiControlPlane/serial/AddWorkerNode (30.15s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-724461 -v=7 --alsologtostderr
E1001 23:10:28.087656   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-724461 -v=7 --alsologtostderr: (29.338989446s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.15s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-724461 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (15.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp testdata/cp-test.txt ha-724461:/home/docker/cp-test.txt
E1001 23:10:55.791667   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2514631069/001/cp-test_ha-724461.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461:/home/docker/cp-test.txt ha-724461-m02:/home/docker/cp-test_ha-724461_ha-724461-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m02 "sudo cat /home/docker/cp-test_ha-724461_ha-724461-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461:/home/docker/cp-test.txt ha-724461-m03:/home/docker/cp-test_ha-724461_ha-724461-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m03 "sudo cat /home/docker/cp-test_ha-724461_ha-724461-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461:/home/docker/cp-test.txt ha-724461-m04:/home/docker/cp-test_ha-724461_ha-724461-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m04 "sudo cat /home/docker/cp-test_ha-724461_ha-724461-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp testdata/cp-test.txt ha-724461-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2514631069/001/cp-test_ha-724461-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m02:/home/docker/cp-test.txt ha-724461:/home/docker/cp-test_ha-724461-m02_ha-724461.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461 "sudo cat /home/docker/cp-test_ha-724461-m02_ha-724461.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m02:/home/docker/cp-test.txt ha-724461-m03:/home/docker/cp-test_ha-724461-m02_ha-724461-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m03 "sudo cat /home/docker/cp-test_ha-724461-m02_ha-724461-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m02:/home/docker/cp-test.txt ha-724461-m04:/home/docker/cp-test_ha-724461-m02_ha-724461-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m04 "sudo cat /home/docker/cp-test_ha-724461-m02_ha-724461-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp testdata/cp-test.txt ha-724461-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2514631069/001/cp-test_ha-724461-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m03:/home/docker/cp-test.txt ha-724461:/home/docker/cp-test_ha-724461-m03_ha-724461.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461 "sudo cat /home/docker/cp-test_ha-724461-m03_ha-724461.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m03:/home/docker/cp-test.txt ha-724461-m02:/home/docker/cp-test_ha-724461-m03_ha-724461-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m02 "sudo cat /home/docker/cp-test_ha-724461-m03_ha-724461-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m03:/home/docker/cp-test.txt ha-724461-m04:/home/docker/cp-test_ha-724461-m03_ha-724461-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m04 "sudo cat /home/docker/cp-test_ha-724461-m03_ha-724461-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp testdata/cp-test.txt ha-724461-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2514631069/001/cp-test_ha-724461-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m04:/home/docker/cp-test.txt ha-724461:/home/docker/cp-test_ha-724461-m04_ha-724461.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461 "sudo cat /home/docker/cp-test_ha-724461-m04_ha-724461.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m04:/home/docker/cp-test.txt ha-724461-m02:/home/docker/cp-test_ha-724461-m04_ha-724461-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m02 "sudo cat /home/docker/cp-test_ha-724461-m04_ha-724461-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 cp ha-724461-m04:/home/docker/cp-test.txt ha-724461-m03:/home/docker/cp-test_ha-724461-m04_ha-724461-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 ssh -n ha-724461-m03 "sudo cat /home/docker/cp-test_ha-724461-m04_ha-724461-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.53s)

TestMultiControlPlane/serial/StopSecondaryNode (12.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-724461 node stop m02 -v=7 --alsologtostderr: (11.804855472s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr: exit status 7 (655.182068ms)

-- stdout --
	ha-724461
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-724461-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-724461-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-724461-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1001 23:11:22.250416   89650 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:11:22.250671   89650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:11:22.250680   89650 out.go:358] Setting ErrFile to fd 2...
	I1001 23:11:22.250684   89650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:11:22.250882   89650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 23:11:22.251052   89650 out.go:352] Setting JSON to false
	I1001 23:11:22.251076   89650 mustload.go:65] Loading cluster: ha-724461
	I1001 23:11:22.251130   89650 notify.go:220] Checking for updates...
	I1001 23:11:22.251534   89650 config.go:182] Loaded profile config "ha-724461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:22.251560   89650 status.go:174] checking status of ha-724461 ...
	I1001 23:11:22.252117   89650 cli_runner.go:164] Run: docker container inspect ha-724461 --format={{.State.Status}}
	I1001 23:11:22.269822   89650 status.go:371] ha-724461 host status = "Running" (err=<nil>)
	I1001 23:11:22.269843   89650 host.go:66] Checking if "ha-724461" exists ...
	I1001 23:11:22.270075   89650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-724461
	I1001 23:11:22.287408   89650 host.go:66] Checking if "ha-724461" exists ...
	I1001 23:11:22.287682   89650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:11:22.287720   89650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-724461
	I1001 23:11:22.305003   89650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/ha-724461/id_rsa Username:docker}
	I1001 23:11:22.393923   89650 ssh_runner.go:195] Run: systemctl --version
	I1001 23:11:22.398199   89650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:11:22.409051   89650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:11:22.460127   89650 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-10-01 23:11:22.450733093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 23:11:22.460819   89650 kubeconfig.go:125] found "ha-724461" server: "https://192.168.49.254:8443"
	I1001 23:11:22.460850   89650 api_server.go:166] Checking apiserver status ...
	I1001 23:11:22.460888   89650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:11:22.472020   89650 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	I1001 23:11:22.481856   89650 api_server.go:182] apiserver freezer: "2:freezer:/docker/4e05979f363189e43bb9cf45daf1a373be8f28746bc96d6511ef90a47430e17a/crio/crio-c78a40f667e3786e827b3b29fdae26819c8a8660193429c1e1b149ebae53c6f4"
	I1001 23:11:22.481935   89650 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4e05979f363189e43bb9cf45daf1a373be8f28746bc96d6511ef90a47430e17a/crio/crio-c78a40f667e3786e827b3b29fdae26819c8a8660193429c1e1b149ebae53c6f4/freezer.state
	I1001 23:11:22.490225   89650 api_server.go:204] freezer state: "THAWED"
	I1001 23:11:22.490256   89650 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1001 23:11:22.494091   89650 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1001 23:11:22.494114   89650 status.go:463] ha-724461 apiserver status = Running (err=<nil>)
	I1001 23:11:22.494123   89650 status.go:176] ha-724461 status: &{Name:ha-724461 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:11:22.494145   89650 status.go:174] checking status of ha-724461-m02 ...
	I1001 23:11:22.494374   89650 cli_runner.go:164] Run: docker container inspect ha-724461-m02 --format={{.State.Status}}
	I1001 23:11:22.511571   89650 status.go:371] ha-724461-m02 host status = "Stopped" (err=<nil>)
	I1001 23:11:22.511593   89650 status.go:384] host is not running, skipping remaining checks
	I1001 23:11:22.511599   89650 status.go:176] ha-724461-m02 status: &{Name:ha-724461-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:11:22.511616   89650 status.go:174] checking status of ha-724461-m03 ...
	I1001 23:11:22.511883   89650 cli_runner.go:164] Run: docker container inspect ha-724461-m03 --format={{.State.Status}}
	I1001 23:11:22.529421   89650 status.go:371] ha-724461-m03 host status = "Running" (err=<nil>)
	I1001 23:11:22.529442   89650 host.go:66] Checking if "ha-724461-m03" exists ...
	I1001 23:11:22.529685   89650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-724461-m03
	I1001 23:11:22.548020   89650 host.go:66] Checking if "ha-724461-m03" exists ...
	I1001 23:11:22.548311   89650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:11:22.548369   89650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-724461-m03
	I1001 23:11:22.565470   89650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/ha-724461-m03/id_rsa Username:docker}
	I1001 23:11:22.654104   89650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:11:22.666020   89650 kubeconfig.go:125] found "ha-724461" server: "https://192.168.49.254:8443"
	I1001 23:11:22.666045   89650 api_server.go:166] Checking apiserver status ...
	I1001 23:11:22.666074   89650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:11:22.677002   89650 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup
	I1001 23:11:22.686687   89650 api_server.go:182] apiserver freezer: "2:freezer:/docker/e97d59eed5ad9a2ba80d52022428edb9b6c20f0cb10577818ada1e179ad56422/crio/crio-f42fe3f3c39ef3bf076908a806d56ad4f0881ac4d2bef0bcace4f6901d4e754a"
	I1001 23:11:22.686751   89650 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e97d59eed5ad9a2ba80d52022428edb9b6c20f0cb10577818ada1e179ad56422/crio/crio-f42fe3f3c39ef3bf076908a806d56ad4f0881ac4d2bef0bcace4f6901d4e754a/freezer.state
	I1001 23:11:22.695450   89650 api_server.go:204] freezer state: "THAWED"
	I1001 23:11:22.695496   89650 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1001 23:11:22.700077   89650 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1001 23:11:22.700104   89650 status.go:463] ha-724461-m03 apiserver status = Running (err=<nil>)
	I1001 23:11:22.700113   89650 status.go:176] ha-724461-m03 status: &{Name:ha-724461-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:11:22.700134   89650 status.go:174] checking status of ha-724461-m04 ...
	I1001 23:11:22.700429   89650 cli_runner.go:164] Run: docker container inspect ha-724461-m04 --format={{.State.Status}}
	I1001 23:11:22.718194   89650 status.go:371] ha-724461-m04 host status = "Running" (err=<nil>)
	I1001 23:11:22.718231   89650 host.go:66] Checking if "ha-724461-m04" exists ...
	I1001 23:11:22.718614   89650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-724461-m04
	I1001 23:11:22.738366   89650 host.go:66] Checking if "ha-724461-m04" exists ...
	I1001 23:11:22.738674   89650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:11:22.738718   89650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-724461-m04
	I1001 23:11:22.757794   89650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/ha-724461-m04/id_rsa Username:docker}
	I1001 23:11:22.849968   89650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:11:22.861815   89650 status.go:176] ha-724461-m04 status: &{Name:ha-724461-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.46s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.46s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-724461 node start m02 -v=7 --alsologtostderr: (29.588851888s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.46s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (176.03s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-724461 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-724461 -v=7 --alsologtostderr
E1001 23:11:59.260859   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:11:59.267295   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:11:59.278753   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:11:59.300202   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:11:59.341640   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:11:59.423053   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:11:59.584567   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:11:59.906247   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:12:00.547708   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:12:01.829900   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:12:04.392767   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:12:09.514336   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:12:19.756065   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-724461 -v=7 --alsologtostderr: (36.701412049s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-724461 --wait=true -v=7 --alsologtostderr
E1001 23:12:40.238826   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:13:21.200934   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:43.123325   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-724461 --wait=true -v=7 --alsologtostderr: (2m19.235987492s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-724461
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (176.03s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-724461 node delete m03 -v=7 --alsologtostderr: (11.32224249s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.08s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (35.48s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 stop -v=7 --alsologtostderr
E1001 23:15:28.085862   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-724461 stop -v=7 --alsologtostderr: (35.37709504s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr: exit status 7 (102.039691ms)

-- stdout --
	ha-724461
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-724461-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-724461-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1001 23:15:39.059537  107275 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:15:39.059788  107275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:15:39.059796  107275 out.go:358] Setting ErrFile to fd 2...
	I1001 23:15:39.059801  107275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:15:39.060004  107275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 23:15:39.060174  107275 out.go:352] Setting JSON to false
	I1001 23:15:39.060197  107275 mustload.go:65] Loading cluster: ha-724461
	I1001 23:15:39.060306  107275 notify.go:220] Checking for updates...
	I1001 23:15:39.060704  107275 config.go:182] Loaded profile config "ha-724461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:15:39.060730  107275 status.go:174] checking status of ha-724461 ...
	I1001 23:15:39.061235  107275 cli_runner.go:164] Run: docker container inspect ha-724461 --format={{.State.Status}}
	I1001 23:15:39.081089  107275 status.go:371] ha-724461 host status = "Stopped" (err=<nil>)
	I1001 23:15:39.081110  107275 status.go:384] host is not running, skipping remaining checks
	I1001 23:15:39.081116  107275 status.go:176] ha-724461 status: &{Name:ha-724461 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:15:39.081137  107275 status.go:174] checking status of ha-724461-m02 ...
	I1001 23:15:39.081398  107275 cli_runner.go:164] Run: docker container inspect ha-724461-m02 --format={{.State.Status}}
	I1001 23:15:39.099203  107275 status.go:371] ha-724461-m02 host status = "Stopped" (err=<nil>)
	I1001 23:15:39.099243  107275 status.go:384] host is not running, skipping remaining checks
	I1001 23:15:39.099255  107275 status.go:176] ha-724461-m02 status: &{Name:ha-724461-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:15:39.099286  107275 status.go:174] checking status of ha-724461-m04 ...
	I1001 23:15:39.099560  107275 cli_runner.go:164] Run: docker container inspect ha-724461-m04 --format={{.State.Status}}
	I1001 23:15:39.117149  107275 status.go:371] ha-724461-m04 host status = "Stopped" (err=<nil>)
	I1001 23:15:39.117172  107275 status.go:384] host is not running, skipping remaining checks
	I1001 23:15:39.117178  107275 status.go:176] ha-724461-m04 status: &{Name:ha-724461-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.48s)

TestMultiControlPlane/serial/RestartCluster (68.24s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-724461 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-724461 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m7.475849466s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.24s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (66.38s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-724461 --control-plane -v=7 --alsologtostderr
E1001 23:16:59.260470   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:17:26.965508   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-724461 --control-plane -v=7 --alsologtostderr: (1m5.557664217s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-724461 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (66.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestJSONOutput/start/Command (71.02s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-133667 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-133667 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m11.020970794s)
--- PASS: TestJSONOutput/start/Command (71.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-133667 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-133667 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-133667 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-133667 --output=json --user=testUser: (5.769652193s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-772243 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-772243 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.51666ms)

-- stdout --
	{"specversion":"1.0","id":"3f305af2-12b1-4e99-bdd9-86bd2a736c34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-772243] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"88b4f766-d876-4966-8463-cf70e3b3d8c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"c7bad4e0-54aa-48e5-9d1a-eb8cf431e161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3930b0f4-fd79-4ea3-85ca-7b9384d69d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig"}}
	{"specversion":"1.0","id":"c52a3018-449b-4e98-9609-c5e31e988c0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube"}}
	{"specversion":"1.0","id":"9c9a25db-ac39-4880-9b63-3bd9660d5f8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"16182fb7-aff9-4ebe-b99a-442a9c2b1b6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a1b7396f-c008-41f5-854d-404a230c187b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-772243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-772243
--- PASS: TestErrorJSONOutput (0.20s)

TestKicCustomNetwork/create_custom_network (26.84s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-168798 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-168798 --network=: (24.891144189s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-168798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-168798
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-168798: (1.927702523s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.84s)

TestKicCustomNetwork/use_default_bridge_network (26.34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-256690 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-256690 --network=bridge: (24.500298102s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-256690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-256690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-256690: (1.82023108s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.34s)

TestKicExistingNetwork (23.44s)

=== RUN   TestKicExistingNetwork
I1001 23:20:18.245342   16095 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1001 23:20:18.261827   16095 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1001 23:20:18.261920   16095 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1001 23:20:18.261947   16095 cli_runner.go:164] Run: docker network inspect existing-network
W1001 23:20:18.278776   16095 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1001 23:20:18.278814   16095 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1001 23:20:18.278829   16095 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1001 23:20:18.278961   16095 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1001 23:20:18.295775   16095 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8e3a87630b6a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6f:13:54:28} reservation:<nil>}
I1001 23:20:18.296257   16095 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001623040}
I1001 23:20:18.296291   16095 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1001 23:20:18.296340   16095 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1001 23:20:18.358568   16095 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-592044 --network=existing-network
E1001 23:20:28.086521   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-592044 --network=existing-network: (21.380935555s)
helpers_test.go:175: Cleaning up "existing-network-592044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-592044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-592044: (1.916420428s)
I1001 23:20:41.673456   16095 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.44s)

TestKicCustomSubnet (23.3s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-632347 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-632347 --subnet=192.168.60.0/24: (21.336092316s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-632347 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-632347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-632347
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-632347: (1.941184648s)
--- PASS: TestKicCustomSubnet (23.30s)

TestKicStaticIP (26.24s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-404729 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-404729 --static-ip=192.168.200.200: (24.112211353s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-404729 ip
helpers_test.go:175: Cleaning up "static-ip-404729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-404729
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-404729: (2.002269238s)
--- PASS: TestKicStaticIP (26.24s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (51.41s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-721972 --driver=docker  --container-runtime=crio
E1001 23:21:51.153101   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-721972 --driver=docker  --container-runtime=crio: (23.642512624s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-734751 --driver=docker  --container-runtime=crio
E1001 23:21:59.260156   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-734751 --driver=docker  --container-runtime=crio: (22.939581509s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-721972
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-734751
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-734751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-734751
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-734751: (1.84937702s)
helpers_test.go:175: Cleaning up "first-721972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-721972
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-721972: (1.824924418s)
--- PASS: TestMinikubeProfile (51.41s)

TestMountStart/serial/StartWithMountFirst (8.19s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-133611 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-133611 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.192676604s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.19s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-133611 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (8.21s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-148474 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-148474 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.210664442s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.21s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-148474 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-133611 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-133611 --alsologtostderr -v=5: (1.610422168s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-148474 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-148474
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-148474: (1.171422155s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-148474
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-148474: (6.217328781s)
--- PASS: TestMountStart/serial/RestartStopped (7.22s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-148474 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (65.23s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346824 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-346824 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.787977471s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.23s)

TestMultiNode/serial/DeployApp2Nodes (16.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-346824 -- rollout status deployment/busybox: (15.544484599s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-m77mw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-wlwsx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-m77mw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-wlwsx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-m77mw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-wlwsx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.92s)

TestMultiNode/serial/PingHostFrom2Pods (0.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-m77mw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-m77mw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-wlwsx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346824 -- exec busybox-7dff88458-wlwsx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)

TestMultiNode/serial/AddNode (24.51s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-346824 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-346824 -v 3 --alsologtostderr: (23.906660533s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.51s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-346824 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (8.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp testdata/cp-test.txt multinode-346824:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp multinode-346824:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2143562260/001/cp-test_multinode-346824.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp multinode-346824:/home/docker/cp-test.txt multinode-346824-m02:/home/docker/cp-test_multinode-346824_multinode-346824-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m02 "sudo cat /home/docker/cp-test_multinode-346824_multinode-346824-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp multinode-346824:/home/docker/cp-test.txt multinode-346824-m03:/home/docker/cp-test_multinode-346824_multinode-346824-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m03 "sudo cat /home/docker/cp-test_multinode-346824_multinode-346824-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp testdata/cp-test.txt multinode-346824-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp multinode-346824-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2143562260/001/cp-test_multinode-346824-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp multinode-346824-m02:/home/docker/cp-test.txt multinode-346824:/home/docker/cp-test_multinode-346824-m02_multinode-346824.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824 "sudo cat /home/docker/cp-test_multinode-346824-m02_multinode-346824.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp multinode-346824-m02:/home/docker/cp-test.txt multinode-346824-m03:/home/docker/cp-test_multinode-346824-m02_multinode-346824-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m03 "sudo cat /home/docker/cp-test_multinode-346824-m02_multinode-346824-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp testdata/cp-test.txt multinode-346824-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp multinode-346824-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2143562260/001/cp-test_multinode-346824-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp multinode-346824-m03:/home/docker/cp-test.txt multinode-346824:/home/docker/cp-test_multinode-346824-m03_multinode-346824.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824 "sudo cat /home/docker/cp-test_multinode-346824-m03_multinode-346824.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 cp multinode-346824-m03:/home/docker/cp-test.txt multinode-346824-m02:/home/docker/cp-test_multinode-346824-m03_multinode-346824-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 ssh -n multinode-346824-m02 "sudo cat /home/docker/cp-test_multinode-346824-m03_multinode-346824-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.93s)

TestMultiNode/serial/StopNode (2.1s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-346824 node stop m03: (1.175901781s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-346824 status: exit status 7 (464.365655ms)

-- stdout --
	multinode-346824
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346824-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346824-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-346824 status --alsologtostderr: exit status 7 (458.919626ms)

-- stdout --
	multinode-346824
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346824-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346824-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1001 23:24:50.504143  172350 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:24:50.504420  172350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:24:50.504430  172350 out.go:358] Setting ErrFile to fd 2...
	I1001 23:24:50.504434  172350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:24:50.504734  172350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 23:24:50.504958  172350 out.go:352] Setting JSON to false
	I1001 23:24:50.504981  172350 mustload.go:65] Loading cluster: multinode-346824
	I1001 23:24:50.505098  172350 notify.go:220] Checking for updates...
	I1001 23:24:50.505481  172350 config.go:182] Loaded profile config "multinode-346824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:24:50.505502  172350 status.go:174] checking status of multinode-346824 ...
	I1001 23:24:50.505946  172350 cli_runner.go:164] Run: docker container inspect multinode-346824 --format={{.State.Status}}
	I1001 23:24:50.526063  172350 status.go:371] multinode-346824 host status = "Running" (err=<nil>)
	I1001 23:24:50.526088  172350 host.go:66] Checking if "multinode-346824" exists ...
	I1001 23:24:50.526371  172350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-346824
	I1001 23:24:50.544619  172350 host.go:66] Checking if "multinode-346824" exists ...
	I1001 23:24:50.544923  172350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:24:50.544967  172350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-346824
	I1001 23:24:50.562691  172350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/multinode-346824/id_rsa Username:docker}
	I1001 23:24:50.653517  172350 ssh_runner.go:195] Run: systemctl --version
	I1001 23:24:50.657649  172350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:24:50.668899  172350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:24:50.715913  172350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-01 23:24:50.705724319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 23:24:50.716536  172350 kubeconfig.go:125] found "multinode-346824" server: "https://192.168.67.2:8443"
	I1001 23:24:50.716562  172350 api_server.go:166] Checking apiserver status ...
	I1001 23:24:50.716593  172350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:24:50.726939  172350 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1480/cgroup
	I1001 23:24:50.735866  172350 api_server.go:182] apiserver freezer: "2:freezer:/docker/6132fab31ac45bd4b95470061ce8c01f6fc764ea40b73a2a14f4efbfe4891f2a/crio/crio-055ccf0b7771cdc2392ab15f34e5c630ba702a8b83392c7663f9f140e39a480e"
	I1001 23:24:50.735924  172350 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6132fab31ac45bd4b95470061ce8c01f6fc764ea40b73a2a14f4efbfe4891f2a/crio/crio-055ccf0b7771cdc2392ab15f34e5c630ba702a8b83392c7663f9f140e39a480e/freezer.state
	I1001 23:24:50.743967  172350 api_server.go:204] freezer state: "THAWED"
	I1001 23:24:50.743995  172350 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1001 23:24:50.747587  172350 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1001 23:24:50.747610  172350 status.go:463] multinode-346824 apiserver status = Running (err=<nil>)
	I1001 23:24:50.747619  172350 status.go:176] multinode-346824 status: &{Name:multinode-346824 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:24:50.747634  172350 status.go:174] checking status of multinode-346824-m02 ...
	I1001 23:24:50.747874  172350 cli_runner.go:164] Run: docker container inspect multinode-346824-m02 --format={{.State.Status}}
	I1001 23:24:50.765234  172350 status.go:371] multinode-346824-m02 host status = "Running" (err=<nil>)
	I1001 23:24:50.765260  172350 host.go:66] Checking if "multinode-346824-m02" exists ...
	I1001 23:24:50.765520  172350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-346824-m02
	I1001 23:24:50.783099  172350 host.go:66] Checking if "multinode-346824-m02" exists ...
	I1001 23:24:50.783387  172350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:24:50.783430  172350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-346824-m02
	I1001 23:24:50.800921  172350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19740-9314/.minikube/machines/multinode-346824-m02/id_rsa Username:docker}
	I1001 23:24:50.889617  172350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:24:50.900558  172350 status.go:176] multinode-346824-m02 status: &{Name:multinode-346824-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:24:50.900604  172350 status.go:174] checking status of multinode-346824-m03 ...
	I1001 23:24:50.900939  172350 cli_runner.go:164] Run: docker container inspect multinode-346824-m03 --format={{.State.Status}}
	I1001 23:24:50.918064  172350 status.go:371] multinode-346824-m03 host status = "Stopped" (err=<nil>)
	I1001 23:24:50.918099  172350 status.go:384] host is not running, skipping remaining checks
	I1001 23:24:50.918107  172350 status.go:176] multinode-346824-m03 status: &{Name:multinode-346824-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-346824 node start m03 -v=7 --alsologtostderr: (8.360321046s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.01s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (108.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-346824
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-346824
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-346824: (24.672947119s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346824 --wait=true -v=8 --alsologtostderr
E1001 23:25:28.087252   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-346824 --wait=true -v=8 --alsologtostderr: (1m23.735158753s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-346824
--- PASS: TestMultiNode/serial/RestartKeepsNodes (108.50s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-346824 node delete m03: (4.671276777s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.24s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 stop
E1001 23:26:59.260381   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-346824 stop: (23.559047499s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-346824 status: exit status 7 (87.306204ms)

                                                
                                                
-- stdout --
	multinode-346824
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-346824-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-346824 status --alsologtostderr: exit status 7 (84.076693ms)

                                                
                                                
-- stdout --
	multinode-346824
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-346824-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:27:17.362632  182046 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:27:17.362895  182046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:27:17.362903  182046 out.go:358] Setting ErrFile to fd 2...
	I1001 23:27:17.362908  182046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:27:17.363076  182046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 23:27:17.363239  182046 out.go:352] Setting JSON to false
	I1001 23:27:17.363264  182046 mustload.go:65] Loading cluster: multinode-346824
	I1001 23:27:17.363314  182046 notify.go:220] Checking for updates...
	I1001 23:27:17.363662  182046 config.go:182] Loaded profile config "multinode-346824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:27:17.363680  182046 status.go:174] checking status of multinode-346824 ...
	I1001 23:27:17.364087  182046 cli_runner.go:164] Run: docker container inspect multinode-346824 --format={{.State.Status}}
	I1001 23:27:17.383019  182046 status.go:371] multinode-346824 host status = "Stopped" (err=<nil>)
	I1001 23:27:17.383050  182046 status.go:384] host is not running, skipping remaining checks
	I1001 23:27:17.383056  182046 status.go:176] multinode-346824 status: &{Name:multinode-346824 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:27:17.383095  182046 status.go:174] checking status of multinode-346824-m02 ...
	I1001 23:27:17.383376  182046 cli_runner.go:164] Run: docker container inspect multinode-346824-m02 --format={{.State.Status}}
	I1001 23:27:17.401050  182046 status.go:371] multinode-346824-m02 host status = "Stopped" (err=<nil>)
	I1001 23:27:17.401071  182046 status.go:384] host is not running, skipping remaining checks
	I1001 23:27:17.401078  182046 status.go:176] multinode-346824-m02 status: &{Name:multinode-346824-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.73s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (60.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346824 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-346824 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (59.63964671s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346824 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.20s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-346824
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346824-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-346824-m02 --driver=docker  --container-runtime=crio: exit status 14 (64.591763ms)

                                                
                                                
-- stdout --
	* [multinode-346824-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-346824-m02' is duplicated with machine name 'multinode-346824-m02' in profile 'multinode-346824'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346824-m03 --driver=docker  --container-runtime=crio
E1001 23:28:22.329784   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-346824-m03 --driver=docker  --container-runtime=crio: (23.514874867s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-346824
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-346824: exit status 80 (263.34738ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-346824 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-346824-m03 already exists in multinode-346824-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-346824-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-346824-m03: (1.809397608s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.70s)

                                                
                                    
TestPreload (104.9s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-729483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-729483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m19.467133691s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-729483 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-729483 image pull gcr.io/k8s-minikube/busybox: (1.141959907s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-729483
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-729483: (5.701176074s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-729483 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1001 23:30:28.087318   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-729483 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.444191182s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-729483 image list
helpers_test.go:175: Cleaning up "test-preload-729483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-729483
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-729483: (1.936218913s)
--- PASS: TestPreload (104.90s)

                                                
                                    
TestScheduledStopUnix (99.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-741678 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-741678 --memory=2048 --driver=docker  --container-runtime=crio: (23.32463795s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-741678 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-741678 -n scheduled-stop-741678
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-741678 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1001 23:30:55.811697   16095 retry.go:31] will retry after 131.752µs: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.812838   16095 retry.go:31] will retry after 218.882µs: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.814006   16095 retry.go:31] will retry after 245.154µs: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.815155   16095 retry.go:31] will retry after 463.183µs: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.816267   16095 retry.go:31] will retry after 538.606µs: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.817425   16095 retry.go:31] will retry after 454.304µs: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.818557   16095 retry.go:31] will retry after 869.545µs: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.819693   16095 retry.go:31] will retry after 1.425607ms: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.821920   16095 retry.go:31] will retry after 1.727073ms: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.824180   16095 retry.go:31] will retry after 4.090481ms: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.828356   16095 retry.go:31] will retry after 3.0803ms: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.831573   16095 retry.go:31] will retry after 4.523377ms: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.836804   16095 retry.go:31] will retry after 10.253717ms: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.848076   16095 retry.go:31] will retry after 14.674369ms: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
I1001 23:30:55.863366   16095 retry.go:31] will retry after 35.665758ms: open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/scheduled-stop-741678/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-741678 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-741678 -n scheduled-stop-741678
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-741678
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-741678 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1001 23:31:59.260115   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-741678
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-741678: exit status 7 (65.046013ms)

                                                
                                                
-- stdout --
	scheduled-stop-741678
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-741678 -n scheduled-stop-741678
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-741678 -n scheduled-stop-741678: exit status 7 (63.884699ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-741678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-741678
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-741678: (4.667099902s)
--- PASS: TestScheduledStopUnix (99.27s)

                                                
                                    
TestInsufficientStorage (9.79s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-496192 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-496192 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.486910317s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"05108af9-0312-4e41-83e1-156e0b78156d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-496192] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd8def48-726c-47c2-ae7a-5a394f22ba1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"940d3ddc-224e-4e03-8780-84fa8a50307e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3973ea5f-dcd3-4da0-bb3c-a48a7b0ed585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig"}}
	{"specversion":"1.0","id":"0aea5165-9a8e-49f1-82e5-1a14efcd2761","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube"}}
	{"specversion":"1.0","id":"823e1e6f-a869-4362-9853-4ea8beedb6f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c5607203-51b2-4766-acb4-d2043a59cac0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2d3ee4f9-2976-447d-8d0f-aab380be4986","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bcc23167-12b3-4902-8c02-d58886957803","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8ffd3bb9-e535-480f-a0c7-ea5a8b64146c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9263cc2a-7873-418e-b609-cc8e47cd7682","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"67428fad-a94f-4132-bee7-8f1aef154817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-496192\" primary control-plane node in \"insufficient-storage-496192\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"12589e1a-368b-4c55-b620-2c7ced27a344","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ee7131c-1d23-4b97-aae3-15adb4356bb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b07066ed-676e-489a-b649-47292d70ef8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-496192 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-496192 --output=json --layout=cluster: exit status 7 (258.23947ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-496192","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-496192","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 23:32:19.095894  204290 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-496192" does not appear in /home/jenkins/minikube-integration/19740-9314/kubeconfig

                                                
                                                
** /stderr **
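The `--output=json --layout=cluster` payload above can be checked programmatically. A minimal sketch, using the exact JSON printed by the test (the 507/500/405 status codes are taken verbatim from the output above, not invented):

```python
import json

# Status payload as printed by `minikube status --output=json --layout=cluster`
# for the insufficient-storage profile (copied from the test output above).
payload = '''{"Name":"insufficient-storage-496192","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-496192","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(payload)
# The cluster-level code mirrors HTTP semantics: 507 = insufficient storage.
assert status["StatusCode"] == 507
assert status["StatusName"] == "InsufficientStorage"
# While storage is exhausted, every node component reports Stopped.
for node in status["Nodes"]:
    for comp in node["Components"].values():
        assert comp["StatusName"] == "Stopped"
print("cluster unhealthy:", status["StatusDetail"])
```

This is why the test accepts exit status 7 here: the command succeeded at reporting, and the unhealthy state is carried in the JSON status codes.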
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-496192 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-496192 --output=json --layout=cluster: exit status 7 (257.01768ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-496192","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-496192","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 23:32:19.352838  204388 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-496192" does not appear in /home/jenkins/minikube-integration/19740-9314/kubeconfig
	E1001 23:32:19.363434  204388 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/insufficient-storage-496192/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-496192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-496192
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-496192: (1.785572736s)
--- PASS: TestInsufficientStorage (9.79s)

                                                
                                    
TestRunningBinaryUpgrade (135.59s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2134872504 start -p running-upgrade-400984 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2134872504 start -p running-upgrade-400984 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.306158426s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-400984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-400984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.457752195s)
helpers_test.go:175: Cleaning up "running-upgrade-400984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-400984
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-400984: (3.989875202s)
--- PASS: TestRunningBinaryUpgrade (135.59s)

                                                
                                    
TestKubernetesUpgrade (348.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-283878 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-283878 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.766673288s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-283878
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-283878: (3.784902761s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-283878 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-283878 status --format={{.Host}}: exit status 7 (78.122169ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-283878 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-283878 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.63685947s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-283878 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-283878 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-283878 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (73.580149ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-283878] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-283878
	    minikube start -p kubernetes-upgrade-283878 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2838782 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-283878 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
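The `K8S_DOWNGRADE_UNSUPPORTED` exit above comes from a guard that refuses to move an existing cluster to an older Kubernetes version in place. A minimal sketch of that kind of check — not minikube's actual implementation; the function names and tuple-based version parsing here are assumptions for illustration:

```python
def parse_version(v: str) -> tuple:
    # "v1.31.1" -> (1, 31, 1); tuples compare component-wise.
    return tuple(int(p) for p in v.lstrip("v").split("."))

def check_requested_version(existing: str, requested: str) -> None:
    # Refuse in-place downgrades; recreating the cluster is the safe path.
    if parse_version(requested) < parse_version(existing):
        raise SystemExit(
            f"K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade {existing} -> {requested}"
        )

check_requested_version("v1.31.1", "v1.31.1")  # same version: allowed
try:
    check_requested_version("v1.31.1", "v1.20.0")  # downgrade: rejected
except SystemExit as e:
    print(e)
```

The test exercises exactly this path: upgrade v1.20.0 → v1.31.1 succeeds, the attempted downgrade back to v1.20.0 exits 106, and a restart at v1.31.1 then succeeds.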
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-283878 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-283878 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.312583704s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-283878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-283878
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-283878: (2.345212903s)
--- PASS: TestKubernetesUpgrade (348.06s)

                                                
                                    
TestMissingContainerUpgrade (132.11s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2550347486 start -p missing-upgrade-319381 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2550347486 start -p missing-upgrade-319381 --memory=2200 --driver=docker  --container-runtime=crio: (1m3.651882686s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-319381
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-319381: (17.063187228s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-319381
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-319381 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-319381 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.9023623s)
helpers_test.go:175: Cleaning up "missing-upgrade-319381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-319381
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-319381: (3.021180478s)
--- PASS: TestMissingContainerUpgrade (132.11s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-361161 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-361161 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (72.514257ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-361161] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (33.2s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-361161 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-361161 --driver=docker  --container-runtime=crio: (32.896446485s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-361161 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.20s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (12.16s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-361161 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-361161 --no-kubernetes --driver=docker  --container-runtime=crio: (9.900655724s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-361161 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-361161 status -o json: exit status 2 (313.59774ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-361161","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
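The exit-status-2 result above is expected rather than a failure: the profile was restarted with `--no-kubernetes`, so the host container is up while the Kubernetes components are stopped. Parsing the JSON printed above confirms the state the test is asserting:

```python
import json

# Status line as printed by `minikube -p NoKubernetes-361161 status -o json`
# (copied from the test output above).
payload = '{"Name":"NoKubernetes-361161","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

status = json.loads(payload)
assert status["Host"] == "Running"            # container alive
assert status["Kubelet"] == "Stopped"         # but no Kubernetes running
assert status["APIServer"] == "Stopped"
assert status["Worker"] is False              # control-plane node, not a worker
```

minikube signals this mixed Running/Stopped state through exit code 2, which is why the harness logs a non-zero exit yet the test still passes.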
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-361161
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-361161: (1.946431494s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.16s)

                                                
                                    
TestNoKubernetes/serial/Start (7.4s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-361161 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-361161 --no-kubernetes --driver=docker  --container-runtime=crio: (7.404671518s)
--- PASS: TestNoKubernetes/serial/Start (7.40s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-361161 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-361161 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.2884ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (7.07s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (6.149879629s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (7.07s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-361161
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-361161: (1.204709595s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.53s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-361161 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-361161 --driver=docker  --container-runtime=crio: (6.529105589s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.53s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-361161 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-361161 "sudo systemctl is-active --quiet service kubelet": exit status 1 (246.223008ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (79.93s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.293169745 start -p stopped-upgrade-054487 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.293169745 start -p stopped-upgrade-054487 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.626863562s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.293169745 -p stopped-upgrade-054487 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.293169745 -p stopped-upgrade-054487 stop: (2.718782785s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-054487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-054487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.580204816s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (79.93s)

                                                
                                    
TestNetworkPlugins/group/false (3.33s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-534078 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-534078 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (172.30557ms)

                                                
                                                
-- stdout --
	* [false-534078] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:34:43.409689  239828 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:34:43.409826  239828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:34:43.409836  239828 out.go:358] Setting ErrFile to fd 2...
	I1001 23:34:43.409840  239828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:34:43.410024  239828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9314/.minikube/bin
	I1001 23:34:43.410629  239828 out.go:352] Setting JSON to false
	I1001 23:34:43.411818  239828 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4630,"bootTime":1727821053,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:34:43.411943  239828 start.go:139] virtualization: kvm guest
	I1001 23:34:43.414502  239828 out.go:177] * [false-534078] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:34:43.416109  239828 notify.go:220] Checking for updates...
	I1001 23:34:43.416121  239828 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:34:43.417822  239828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:34:43.419387  239828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9314/kubeconfig
	I1001 23:34:43.420814  239828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9314/.minikube
	I1001 23:34:43.422494  239828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:34:43.424015  239828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:34:43.426202  239828 config.go:182] Loaded profile config "force-systemd-env-622517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:34:43.426450  239828 config.go:182] Loaded profile config "kubernetes-upgrade-283878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:34:43.426633  239828 config.go:182] Loaded profile config "stopped-upgrade-054487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1001 23:34:43.426782  239828 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:34:43.454724  239828 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:34:43.454806  239828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:34:43.514566  239828 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:83 SystemTime:2024-10-01 23:34:43.504105842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1001 23:34:43.514661  239828 docker.go:318] overlay module found
	I1001 23:34:43.516466  239828 out.go:177] * Using the docker driver based on user configuration
	I1001 23:34:43.517778  239828 start.go:297] selected driver: docker
	I1001 23:34:43.517795  239828 start.go:901] validating driver "docker" against <nil>
	I1001 23:34:43.517808  239828 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:34:43.520082  239828 out.go:201] 
	W1001 23:34:43.521331  239828 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1001 23:34:43.522629  239828 out.go:201] 

                                                
                                                
** /stderr **
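The `MK_USAGE` failure above is the expected outcome of this test: crio has no built-in pod networking, so minikube rejects `--cni=false` with that runtime before any cluster is created. A minimal sketch of such a validation rule — hypothetical names, not minikube's actual code; including containerd in the check is an assumption:

```python
def validate_cni(container_runtime: str, cni: str) -> None:
    # crio relies on a CNI plugin for pod networking, so disabling CNI
    # is a usage error for that runtime (assumed likewise for containerd).
    if cni == "false" and container_runtime in ("crio", "containerd"):
        raise ValueError(f'The "{container_runtime}" container runtime requires CNI')

validate_cni("docker", "false")  # docker has its own bridge network: allowed
try:
    validate_cni("crio", "false")
except ValueError as e:
    print("X Exiting due to MK_USAGE:", e)
```

Because validation fails up front, no profile ever exists, which is why every debugLogs probe below reports "context was not found" or "Profile ... not found".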
net_test.go:88: 
----------------------- debugLogs start: false-534078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-534078

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-534078

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-534078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-534078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-534078

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-534078

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-534078

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-534078

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-534078

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-534078

>>> host: /etc/nsswitch.conf:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /etc/hosts:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /etc/resolv.conf:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-534078

>>> host: crictl pods:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: crictl containers:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> k8s: describe netcat deployment:
error: context "false-534078" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-534078" does not exist

>>> k8s: netcat logs:
error: context "false-534078" does not exist

>>> k8s: describe coredns deployment:
error: context "false-534078" does not exist

>>> k8s: describe coredns pods:
error: context "false-534078" does not exist

>>> k8s: coredns logs:
error: context "false-534078" does not exist

>>> k8s: describe api server pod(s):
error: context "false-534078" does not exist

>>> k8s: api server logs:
error: context "false-534078" does not exist

>>> host: /etc/cni:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: ip a s:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: ip r s:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: iptables-save:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: iptables table nat:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> k8s: describe kube-proxy daemon set:
error: context "false-534078" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-534078" does not exist

>>> k8s: kube-proxy logs:
error: context "false-534078" does not exist

>>> host: kubelet daemon status:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: kubelet daemon config:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> k8s: kubelet logs:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 23:34:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-283878
contexts:
- context:
    cluster: kubernetes-upgrade-283878
    user: kubernetes-upgrade-283878
  name: kubernetes-upgrade-283878
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-283878
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kubernetes-upgrade-283878/client.crt
    client-key: /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kubernetes-upgrade-283878/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-534078

>>> host: docker daemon status:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: docker daemon config:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /etc/docker/daemon.json:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: docker system info:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: cri-docker daemon status:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: cri-docker daemon config:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: cri-dockerd version:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: containerd daemon status:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: containerd daemon config:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /etc/containerd/config.toml:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: containerd config dump:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: crio daemon status:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: crio daemon config:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: /etc/crio:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"

>>> host: crio config:
* Profile "false-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534078"
----------------------- debugLogs end: false-534078 [took: 2.990512154s] --------------------------------
helpers_test.go:175: Cleaning up "false-534078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-534078
--- PASS: TestNetworkPlugins/group/false (3.33s)
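Every kubectl-based probe in the debugLogs above fails with `context was not found for specified context: false-534078`, which is consistent with the kubeconfig dumped in the log: `current-context` is empty and the only context defined is `kubernetes-upgrade-283878`, since the `false-534078` profile had already been torn down. A minimal sketch of that lookup, using a hypothetical `resolve_context` helper rather than actual kubectl or minikube code:

```python
# Sketch (not kubectl source) of why every "kubectl --context false-534078"
# call in the debug log fails: the kubeconfig defines no such context.
kubeconfig = {
    "current-context": "",
    "contexts": [{"name": "kubernetes-upgrade-283878"}],
}

def resolve_context(cfg, requested=None):
    """Return the named context entry, mimicking kubectl's lookup order."""
    name = requested or cfg.get("current-context")
    for ctx in cfg.get("contexts", []):
        if ctx["name"] == name:
            return ctx
    raise LookupError(f"context was not found for specified context: {name}")

try:
    resolve_context(kubeconfig, "false-534078")
except LookupError as err:
    print(err)  # prints: context was not found for specified context: false-534078
```

Because the profile cleanup runs before debugLogs for the intentionally-skipped "false" CNI group, these errors are expected and the test still passes.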

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.97s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-054487
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-054487: (2.972310642s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.97s)

TestPause/serial/Start (42.69s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-653491 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-653491 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (42.689937269s)
--- PASS: TestPause/serial/Start (42.69s)

TestNetworkPlugins/group/auto/Start (39.38s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.37953471s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.38s)

TestPause/serial/SecondStartNoReconfiguration (29.28s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-653491 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-653491 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.270056131s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.28s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-534078 "pgrep -a kubelet"
I1001 23:36:22.361721   16095 config.go:182] Loaded profile config "auto-534078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-534078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t7l2x" [c85857fa-74cf-4b2d-98dd-d725e9c46525] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t7l2x" [c85857fa-74cf-4b2d-98dd-d725e9c46525] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004105793s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

TestNetworkPlugins/group/auto/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-534078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestPause/serial/Pause (0.73s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-653491 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestPause/serial/VerifyStatus (0.31s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-653491 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-653491 --output=json --layout=cluster: exit status 2 (312.121299ms)
-- stdout --
	{"Name":"pause-653491","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-653491","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
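The `--output=json --layout=cluster` payload above encodes component state with HTTP-style status codes (200 = OK, 405 = Stopped, 418 = Paused), and the paused state is why the command exits with status 2. A short sketch of reading that layout, using a trimmed copy of the JSON from the log:

```python
import json

# Trimmed copy of the "minikube status --output=json --layout=cluster" payload
# logged above; codes are HTTP-style: 200=OK, 405=Stopped, 418=Paused.
payload = '''
{"Name": "pause-653491", "StatusCode": 418, "StatusName": "Paused",
 "Nodes": [{"Name": "pause-653491", "StatusCode": 200, "StatusName": "OK",
            "Components": {"apiserver": {"Name": "apiserver", "StatusCode": 418, "StatusName": "Paused"},
                           "kubelet": {"Name": "kubelet", "StatusCode": 405, "StatusName": "Stopped"}}}]}
'''

status = json.loads(payload)
# Collect every component reported as paused (code 418) across all nodes.
paused = [c["Name"] for node in status["Nodes"]
          for c in node["Components"].values()
          if c["StatusCode"] == 418]
print(status["StatusName"], paused)  # Paused ['apiserver']
```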

                                                
                                    
TestPause/serial/Unpause (0.68s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-653491 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.71s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-653491 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.71s)

TestPause/serial/DeletePaused (2.8s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-653491 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-653491 --alsologtostderr -v=5: (2.795767683s)
--- PASS: TestPause/serial/DeletePaused (2.80s)

TestNetworkPlugins/group/kindnet/Start (42.21s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.210640315s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.21s)

TestPause/serial/VerifyDeletedResources (0.58s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-653491
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-653491: exit status 1 (20.040287ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-653491: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.58s)

TestNetworkPlugins/group/calico/Start (56.38s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1001 23:36:59.260394   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (56.382277269s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.38s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9k7zz" [1f54513b-373d-493b-a91a-0be8fc5f58df] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0040398s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-534078 "pgrep -a kubelet"
I1001 23:37:39.393528   16095 config.go:182] Loaded profile config "kindnet-534078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-534078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dwf7z" [f41f1c9b-7675-481d-9c1a-c30b914e43ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dwf7z" [f41f1c9b-7675-481d-9c1a-c30b914e43ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004138753s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-534078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)
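Each DNS subtest in this report reduces to one check: does `nslookup kubernetes.default` succeed inside the netcat pod. A rough local sketch of the same pass/fail criterion using the stdlib resolver (outside a pod, `kubernetes.default` would not resolve, so the demo probes names any host can answer; the `resolves` helper is ours, not part of the suite):

```python
import socket

def resolves(name: str) -> bool:
    """True if `name` resolves to at least one address — roughly the
    success criterion of `nslookup <name>` in the DNS subtests."""
    try:
        return len(socket.getaddrinfo(name, None)) > 0
    except socket.gaierror:
        return False

print(resolves("localhost"))             # resolvable on any host -> True
print(resolves("no-such-host.invalid"))  # .invalid is reserved and never resolves -> False
```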

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-l6ld2" [92fe8efa-8d43-4000-9c06-0647ebc95c50] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004709915s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)
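The Localhost and HairPin subtests both run `nc -w 5 -i 5 -z <host> 8080` inside the netcat pod: a zero-I/O TCP connect probe, once against `localhost` and once against the pod's own service name (which exercises hairpin NAT). A hedged Python equivalent of that probe, demonstrated against a throwaway local listener rather than a cluster (the helper name and the demo listener are illustrative, not part of the suite):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """True if a TCP connection to host:port succeeds within `timeout` —
    the same yes/no answer `nc -w 5 -z <host> <port>` gives."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: probe a listener we control.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(tcp_probe("127.0.0.1", port))  # listener up -> True
srv.close()
print(tcp_probe("127.0.0.1", port))  # listener gone -> False
```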

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-534078 "pgrep -a kubelet"
I1001 23:37:55.916533   16095 config.go:182] Loaded profile config "calico-534078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-534078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-54qvw" [561e6fe9-4d36-4ecb-b92e-e0ee63d58c94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-54qvw" [561e6fe9-4d36-4ecb-b92e-e0ee63d58c94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003503466s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

TestNetworkPlugins/group/calico/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-534078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (49.62s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.617730444s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.62s)

TestNetworkPlugins/group/enable-default-cni/Start (65.05s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1001 23:38:31.154897   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.047647961s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.05s)

TestNetworkPlugins/group/flannel/Start (47.69s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.691134167s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.69s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-534078 "pgrep -a kubelet"
I1001 23:38:59.067766   16095 config.go:182] Loaded profile config "custom-flannel-534078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-534078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7ggbr" [d1ba82d1-4b30-4bc1-95e8-9f47d349c219] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7ggbr" [d1ba82d1-4b30-4bc1-95e8-9f47d349c219] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004458593s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-534078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (67.47s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-534078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.465955154s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.47s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7scn2" [8c16e70e-c15c-4d8e-b1cd-5143102e8f5f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00446381s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-534078 "pgrep -a kubelet"
I1001 23:39:27.916340   16095 config.go:182] Loaded profile config "flannel-534078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (11.2s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-534078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8ddd4" [07167e40-f808-45f4-bed4-18cfd31f13cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8ddd4" [07167e40-f808-45f4-bed4-18cfd31f13cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003547908s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.20s)

TestStartStop/group/old-k8s-version/serial/FirstStart (129.76s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-883527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-883527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m9.76320401s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.76s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-534078 "pgrep -a kubelet"
I1001 23:39:32.093169   16095 config.go:182] Loaded profile config "enable-default-cni-534078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-534078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nlknx" [6deeac83-f22a-482c-93ab-261e6896e5e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nlknx" [6deeac83-f22a-482c-93ab-261e6896e5e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004380293s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.22s)

TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-534078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-534078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestStartStop/group/no-preload/serial/FirstStart (55.51s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-483534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-483534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (55.50831843s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.51s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.49s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-027517 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-027517 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (41.485928023s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.49s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-534078 "pgrep -a kubelet"
I1001 23:40:26.757549   16095 config.go:182] Loaded profile config "bridge-534078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-534078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mhwwp" [a400d3f5-02bd-4963-a2fb-4d947efc7217] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1001 23:40:28.086022   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mhwwp" [a400d3f5-02bd-4963-a2fb-4d947efc7217] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003516893s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-534078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-534078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)
E1001 23:44:42.139153   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:42.553088   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:52.795358   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:02.331736   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:02.620533   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:13.276685   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:16.994826   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:21.177364   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:26.993396   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:26.999845   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:27.011363   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:27.032751   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:27.074215   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:27.155669   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:27.317123   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:27.638864   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:28.086673   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/addons-003557/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:28.280182   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:29.562011   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:32.123367   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:33.512444   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:45:37.245615   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-027517 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22b27c30-08cb-44a9-bf33-a980c036c4b7] Pending
helpers_test.go:344: "busybox" [22b27c30-08cb-44a9-bf33-a980c036c4b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [22b27c30-08cb-44a9-bf33-a980c036c4b7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004000488s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-027517 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)
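The DeployApp step finishes by reading the busybox pod's open-file soft limit with `ulimit -n`. The same figure can be read in-process with Python's stdlib `resource` module (Unix-only; a local sketch, not something run inside the pod):

```python
import resource

# RLIMIT_NOFILE's soft limit is what `ulimit -n` prints in the busybox
# step above; the hard limit is the ceiling a process may raise it to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"ulimit -n equivalent: {soft}")
```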

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-027517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-027517 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/newest-cni/serial/FirstStart (28.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-671645 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-671645 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (28.116185817s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-027517 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-027517 --alsologtostderr -v=3: (11.91698532s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

TestStartStop/group/no-preload/serial/DeployApp (9.23s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-483534 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1797fc74-1ce1-45c1-8268-4970a7b351df] Pending
helpers_test.go:344: "busybox" [1797fc74-1ce1-45c1-8268-4970a7b351df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1797fc74-1ce1-45c1-8268-4970a7b351df] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003482085s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-483534 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.23s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-483534 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-483534 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/no-preload/serial/Stop (11.98s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-483534 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-483534 --alsologtostderr -v=3: (11.975234874s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.98s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-027517 -n default-k8s-diff-port-027517
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-027517 -n default-k8s-diff-port-027517: exit status 7 (66.264533ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-027517 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-027517 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-027517 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m37.070242038s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-027517 -n default-k8s-diff-port-027517
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.36s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483534 -n no-preload-483534
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483534 -n no-preload-483534: exit status 7 (79.987073ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-483534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (263.3s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-483534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1001 23:41:22.552512   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:41:22.558904   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:41:22.570306   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:41:22.591683   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:41:22.633111   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:41:22.714689   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:41:22.876373   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:41:23.198677   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:41:23.840428   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-483534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.981370081s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483534 -n no-preload-483534
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.30s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-671645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/newest-cni/serial/Stop (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-671645 --alsologtostderr -v=3
E1001 23:41:25.122395   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-671645 --alsologtostderr -v=3: (1.242063209s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671645 -n newest-cni-671645
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671645 -n newest-cni-671645: exit status 7 (79.179663ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-671645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (14.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-671645 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1001 23:41:27.684723   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:41:32.807026   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-671645 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (14.380648395s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671645 -n newest-cni-671645
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.73s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-883527 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6acc006a-517d-41a4-b801-991dddb7d844] Pending
helpers_test.go:344: "busybox" [6acc006a-517d-41a4-b801-991dddb7d844] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6acc006a-517d-41a4-b801-991dddb7d844] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004090042s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-883527 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.50s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-671645 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-671645 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-671645 -n newest-cni-671645
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-671645 -n newest-cni-671645: exit status 2 (328.858698ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-671645 -n newest-cni-671645
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-671645 -n newest-cni-671645: exit status 2 (324.680529ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-671645 --alsologtostderr -v=1
E1001 23:41:43.049340   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-671645 -n newest-cni-671645
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-671645 -n newest-cni-671645
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.98s)

TestStartStop/group/embed-certs/serial/FirstStart (39.69s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-272564 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-272564 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (39.686107146s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.69s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-883527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-883527 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-883527 --alsologtostderr -v=3
E1001 23:41:59.260284   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/functional-077195/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-883527 --alsologtostderr -v=3: (12.037555904s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-883527 -n old-k8s-version-883527
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-883527 -n old-k8s-version-883527: exit status 7 (72.047748ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-883527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (143.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-883527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1001 23:42:03.531447   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-883527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m22.761503693s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-883527 -n old-k8s-version-883527
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (143.07s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-272564 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7476f1ae-090c-48e2-a60d-f67aa99fbc93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7476f1ae-090c-48e2-a60d-f67aa99fbc93] Running
E1001 23:42:33.134752   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:33.141177   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:33.152613   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:33.174056   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:33.215606   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:33.297060   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:33.458603   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:33.780754   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:34.422476   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005284137s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-272564 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-272564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1001 23:42:35.704632   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-272564 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/embed-certs/serial/Stop (12.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-272564 --alsologtostderr -v=3
E1001 23:42:38.265975   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:43.388272   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:44.493261   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-272564 --alsologtostderr -v=3: (12.433579075s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.43s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-272564 -n embed-certs-272564
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-272564 -n embed-certs-272564: exit status 7 (76.516995ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-272564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (262.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-272564 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1001 23:42:49.652925   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:49.659344   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:49.670794   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:49.692191   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:49.733581   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:49.815176   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:49.976725   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:50.298320   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:50.940010   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:52.221387   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:53.629621   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:54.783629   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:42:59.905439   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:10.146871   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:14.111004   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:30.628824   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:55.073384   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kindnet-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:59.237956   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:59.244362   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:59.255799   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:59.277428   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:59.318872   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:59.400383   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:59.561885   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:43:59.883962   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:00.525921   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:01.807828   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:04.369884   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:06.415206   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/auto-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:09.492022   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:11.591084   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/calico-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:19.733404   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/custom-flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:21.646108   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:21.652541   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:21.663777   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:21.685222   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:21.726645   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:21.808140   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:21.969569   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:22.290787   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:22.932119   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-272564 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.240365536s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-272564 -n embed-certs-272564
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.54s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-h89l9" [6c538aaf-119f-4ffa-b9e0-6bbe1c5c641d] Running
E1001 23:44:24.213705   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:26.775442   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003752977s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-h89l9" [6c538aaf-119f-4ffa-b9e0-6bbe1c5c641d] Running
E1001 23:44:31.897332   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:32.299768   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:32.306178   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:32.317594   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:32.338991   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:32.380357   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:32.461797   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:32.623508   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:32.945433   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:33.587512   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:34.869791   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004164987s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-883527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-883527 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-883527 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-883527 -n old-k8s-version-883527
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-883527 -n old-k8s-version-883527: exit status 2 (286.878041ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-883527 -n old-k8s-version-883527
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-883527 -n old-k8s-version-883527: exit status 2 (290.651438ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
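The repeated `status error: exit status 2 (may be ok)` notes above come from the harness probing node state with `minikube status` and tolerating certain non-zero exits while components are paused, rather than failing the test. A minimal shell sketch of that tolerate-and-report pattern; the `(exit 2)` subshell is a hypothetical stand-in for the real status probe, which is assumed (per the log) to return exit status 2 when the component is paused or stopped:

```shell
#!/bin/sh
# Probe a status command without letting `set -e` abort the script,
# then report the captured exit code the way the harness does.
set +e
(exit 2)    # stand-in for: minikube status --format={{.APIServer}} -p <profile>
status=$?
set -e

if [ "$status" -ne 0 ]; then
  echo "status error: exit status $status (may be ok)"
fi
```

The key detail is capturing `$?` immediately after the probe while errexit is suspended, so a non-zero exit is data to inspect rather than a fatal error.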
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-883527 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-883527 -n old-k8s-version-883527
E1001 23:44:37.431648   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-883527 -n old-k8s-version-883527
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.57s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wqpfd" [a9aa700b-f48b-49c6-9e82-1ae792869b2b] Running
E1001 23:45:43.582780   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/flannel-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003675631s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gp8vw" [e42ef51d-65a2-482a-9dda-2ebd005d4879] Running
E1001 23:45:47.487546   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/bridge-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004191029s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wqpfd" [a9aa700b-f48b-49c6-9e82-1ae792869b2b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004321737s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-483534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gp8vw" [e42ef51d-65a2-482a-9dda-2ebd005d4879] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003692681s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-027517 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-483534 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.57s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-483534 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483534 -n no-preload-483534
E1001 23:45:54.238958   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483534 -n no-preload-483534: exit status 2 (280.503009ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-483534 -n no-preload-483534
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-483534 -n no-preload-483534: exit status 2 (282.242753ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-483534 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483534 -n no-preload-483534
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-483534 -n no-preload-483534
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.57s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-027517 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-027517 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-027517 -n default-k8s-diff-port-027517
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-027517 -n default-k8s-diff-port-027517: exit status 2 (290.498918ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-027517 -n default-k8s-diff-port-027517
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-027517 -n default-k8s-diff-port-027517: exit status 2 (332.642461ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-027517 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-027517 -n default-k8s-diff-port-027517
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-027517 -n default-k8s-diff-port-027517
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m6jsz" [890d2560-2eb7-436b-9770-e22e0ed081e5] Running
E1001 23:47:16.160807   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/enable-default-cni-534078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003816868s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m6jsz" [890d2560-2eb7-436b-9770-e22e0ed081e5] Running
E1001 23:47:20.772238   16095 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/old-k8s-version-883527/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003787263s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-272564 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-272564 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.6s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-272564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-272564 -n embed-certs-272564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-272564 -n embed-certs-272564: exit status 2 (284.877685ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-272564 -n embed-certs-272564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-272564 -n embed-certs-272564: exit status 2 (281.246352ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-272564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-272564 -n embed-certs-272564
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-272564 -n embed-certs-272564
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.60s)


Test skip (25/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:783: skipping: crio not supported
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-003557 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.25s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (6.61s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-534078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-534078

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-534078

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-534078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-534078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-534078

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-534078

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-534078

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-534078

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-534078

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-534078

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /etc/hosts:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /etc/resolv.conf:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-534078

>>> host: crictl pods:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: crictl containers:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> k8s: describe netcat deployment:
error: context "kubenet-534078" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-534078" does not exist

>>> k8s: netcat logs:
error: context "kubenet-534078" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-534078" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-534078" does not exist

>>> k8s: coredns logs:
error: context "kubenet-534078" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-534078" does not exist

>>> k8s: api server logs:
error: context "kubenet-534078" does not exist

>>> host: /etc/cni:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: ip a s:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: ip r s:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: iptables-save:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: iptables table nat:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-534078" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-534078" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-534078" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: kubelet daemon config:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> k8s: kubelet logs:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 23:34:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-283878
contexts:
- context:
    cluster: kubernetes-upgrade-283878
    user: kubernetes-upgrade-283878
  name: kubernetes-upgrade-283878
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-283878
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kubernetes-upgrade-283878/client.crt
    client-key: /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kubernetes-upgrade-283878/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-534078

>>> host: docker daemon status:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: docker daemon config:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: docker system info:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: cri-docker daemon status:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: cri-docker daemon config:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: cri-dockerd version:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: containerd daemon status:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: containerd daemon config:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: containerd config dump:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: crio daemon status:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: crio daemon config:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: /etc/crio:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

>>> host: crio config:
* Profile "kubenet-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534078"

----------------------- debugLogs end: kubenet-534078 [took: 6.448051704s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-534078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-534078
--- SKIP: TestNetworkPlugins/group/kubenet (6.61s)

TestNetworkPlugins/group/cilium (3.62s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-534078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-534078

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-534078

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-534078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-534078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-534078

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-534078

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-534078

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-534078

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-534078

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-534078

>>> host: /etc/nsswitch.conf:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /etc/hosts:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /etc/resolv.conf:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-534078

>>> host: crictl pods:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: crictl containers:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> k8s: describe netcat deployment:
error: context "cilium-534078" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-534078" does not exist

>>> k8s: netcat logs:
error: context "cilium-534078" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-534078" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-534078" does not exist

>>> k8s: coredns logs:
error: context "cilium-534078" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-534078" does not exist

>>> k8s: api server logs:
error: context "cilium-534078" does not exist

>>> host: /etc/cni:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: ip a s:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: ip r s:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: iptables-save:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: iptables table nat:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-534078

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-534078

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-534078" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-534078" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-534078

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-534078

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-534078" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-534078" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-534078" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-534078" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-534078" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: kubelet daemon config:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> k8s: kubelet logs:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 23:34:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-283878
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-9314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 23:34:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-054487
contexts:
- context:
    cluster: kubernetes-upgrade-283878
    user: kubernetes-upgrade-283878
  name: kubernetes-upgrade-283878
- context:
    cluster: stopped-upgrade-054487
    user: stopped-upgrade-054487
  name: stopped-upgrade-054487
current-context: stopped-upgrade-054487
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-283878
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kubernetes-upgrade-283878/client.crt
    client-key: /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/kubernetes-upgrade-283878/client.key
- name: stopped-upgrade-054487
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/stopped-upgrade-054487/client.crt
    client-key: /home/jenkins/minikube-integration/19740-9314/.minikube/profiles/stopped-upgrade-054487/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-534078

>>> host: docker daemon status:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: docker daemon config:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: docker system info:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: cri-docker daemon status:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: cri-docker daemon config:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: cri-dockerd version:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: containerd daemon status:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: containerd daemon config:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: containerd config dump:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: crio daemon status:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: crio daemon config:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: /etc/crio:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

>>> host: crio config:
* Profile "cilium-534078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534078"

----------------------- debugLogs end: cilium-534078 [took: 3.433178632s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-534078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-534078
--- SKIP: TestNetworkPlugins/group/cilium (3.62s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-397628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-397628
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
