Test Report: Docker_Linux_crio 18333

35bb0a6fdb2e8bad0653ad48b3d817d653ac2a3a:2024-03-08:33467

Tests failed (2/335)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress  | 153.97       |
| 45    | TestAddons/parallel/Headlamp | 2.6          |
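One hedged note on the headline failure, not part of the report output: the Ingress log below ends with "ssh: Process exited with status 28". ssh relays the exit status of the remote command, and curl uses exit code 28 for CURLE_OPERATION_TIMEDOUT, so this reads as the in-VM curl timing out rather than the ssh transport itself failing. A minimal sketch of the status propagation:

```shell
# Stand-in for the remote curl timing out inside the minikube node
# (curl exits 28 on CURLE_OPERATION_TIMEDOUT); ssh would forward this
# same status back to the test harness.
sh -c 'exit 28'
echo "propagated status: $?"
```

Running this prints `propagated status: 28`, matching the status the test saw after the 2m10s curl attempt.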
TestAddons/parallel/Ingress (153.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-096357 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-096357 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-096357 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0955b4e5-78df-4640-a94f-839dd59390b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0955b4e5-78df-4640-a94f-839dd59390b8] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003598094s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-096357 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.629542476s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-096357 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-096357 addons disable ingress --alsologtostderr -v=1: (7.606763587s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-096357
helpers_test.go:235: (dbg) docker inspect addons-096357:

-- stdout --
	[
	    {
	        "Id": "d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4",
	        "Created": "2024-03-08T02:58:29.99417869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1254261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-08T02:58:30.263144855Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4/hosts",
	        "LogPath": "/var/lib/docker/containers/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4-json.log",
	        "Name": "/addons-096357",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-096357:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-096357",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7e3d5895b991119050677bd5e655d981d7e2255ee5455b18b56c007ab493d742-init/diff:/var/lib/docker/overlay2/3c39ae14a1c3dc02177b83b99337c99805ac4a7cbb72dee66bd275c2d8550aff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e3d5895b991119050677bd5e655d981d7e2255ee5455b18b56c007ab493d742/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e3d5895b991119050677bd5e655d981d7e2255ee5455b18b56c007ab493d742/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e3d5895b991119050677bd5e655d981d7e2255ee5455b18b56c007ab493d742/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-096357",
	                "Source": "/var/lib/docker/volumes/addons-096357/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-096357",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-096357",
	                "name.minikube.sigs.k8s.io": "addons-096357",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a25790a701e11d33e37be29981229ab5a2f49309e200510e7b9444a77f79d84",
	            "SandboxKey": "/var/run/docker/netns/3a25790a701e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-096357": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d3046b665011",
	                        "addons-096357"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "edf7742f4235aff6e15e4039e9ba6ec6a24553437f3dfa636e1881254885f5b6",
	                    "EndpointID": "0856e47015e5551eae2f472204eabba205854e94e23839560fbd8c401942c424",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-096357",
	                        "d3046b665011"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-096357 -n addons-096357
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-096357 logs -n 25: (1.141759895s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-338197                                                                     | download-only-338197   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| delete  | -p download-only-564762                                                                     | download-only-564762   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| start   | --download-only -p                                                                          | download-docker-688312 | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC |                     |
	|         | download-docker-688312                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-688312                                                                   | download-docker-688312 | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-887601   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC |                     |
	|         | binary-mirror-887601                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34547                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-887601                                                                     | binary-mirror-887601   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| addons  | disable dashboard -p                                                                        | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC |                     |
	|         | addons-096357                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC |                     |
	|         | addons-096357                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-096357 --wait=true                                                                | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 03:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-096357 addons                                                                        | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | -p addons-096357                                                                            |                        |         |         |                     |                     |
	| addons  | addons-096357 addons disable                                                                | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-096357 ip                                                                            | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	| addons  | addons-096357 addons disable                                                                | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC |                     |
	|         | -p addons-096357                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | addons-096357                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-096357 ssh cat                                                                       | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | /opt/local-path-provisioner/pvc-30d0ffaf-920e-479b-bbb8-f54aaa1f5b7e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-096357 addons disable                                                                | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | addons-096357                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-096357 ssh curl -s                                                                   | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-096357 addons                                                                        | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:01 UTC | 08 Mar 24 03:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-096357 addons                                                                        | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:01 UTC | 08 Mar 24 03:01 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-096357 ip                                                                            | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:03 UTC | 08 Mar 24 03:03 UTC |
	| addons  | addons-096357 addons disable                                                                | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:03 UTC | 08 Mar 24 03:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-096357 addons disable                                                                | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:03 UTC | 08 Mar 24 03:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 02:58:07
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 02:58:07.848040 1253594 out.go:291] Setting OutFile to fd 1 ...
	I0308 02:58:07.848475 1253594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:58:07.848491 1253594 out.go:304] Setting ErrFile to fd 2...
	I0308 02:58:07.848499 1253594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:58:07.848976 1253594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 02:58:07.850219 1253594 out.go:298] Setting JSON to false
	I0308 02:58:07.851163 1253594 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":20434,"bootTime":1709846254,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 02:58:07.851230 1253594 start.go:139] virtualization: kvm guest
	I0308 02:58:07.853033 1253594 out.go:177] * [addons-096357] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 02:58:07.854711 1253594 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 02:58:07.854713 1253594 notify.go:220] Checking for updates...
	I0308 02:58:07.855965 1253594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 02:58:07.857229 1253594 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 02:58:07.858471 1253594 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	I0308 02:58:07.859621 1253594 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 02:58:07.860801 1253594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 02:58:07.862223 1253594 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 02:58:07.884570 1253594 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0308 02:58:07.884700 1253594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:58:07.932262 1253594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-08 02:58:07.923627422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:58:07.932375 1253594 docker.go:295] overlay module found
	I0308 02:58:07.934109 1253594 out.go:177] * Using the docker driver based on user configuration
	I0308 02:58:07.935296 1253594 start.go:297] selected driver: docker
	I0308 02:58:07.935308 1253594 start.go:901] validating driver "docker" against <nil>
	I0308 02:58:07.935320 1253594 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 02:58:07.936103 1253594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:58:07.984671 1253594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-08 02:58:07.975737838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:58:07.984906 1253594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 02:58:07.985200 1253594 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 02:58:07.986860 1253594 out.go:177] * Using Docker driver with root privileges
	I0308 02:58:07.988352 1253594 cni.go:84] Creating CNI manager for ""
	I0308 02:58:07.988370 1253594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0308 02:58:07.988380 1253594 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0308 02:58:07.988443 1253594 start.go:340] cluster config:
	{Name:addons-096357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:58:07.989782 1253594 out.go:177] * Starting "addons-096357" primary control-plane node in "addons-096357" cluster
	I0308 02:58:07.991042 1253594 cache.go:121] Beginning downloading kic base image for docker with crio
	I0308 02:58:07.992305 1253594 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0308 02:58:07.993574 1253594 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:58:07.993641 1253594 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0308 02:58:07.993663 1253594 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 02:58:07.993675 1253594 cache.go:56] Caching tarball of preloaded images
	I0308 02:58:07.993813 1253594 preload.go:173] Found /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 02:58:07.993831 1253594 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 02:58:07.994177 1253594 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/config.json ...
	I0308 02:58:07.994204 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/config.json: {Name:mke782156128fe9cc35a3f03c9f28dfea004e045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:08.008624 1253594 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0308 02:58:08.008746 1253594 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0308 02:58:08.008761 1253594 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0308 02:58:08.008765 1253594 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0308 02:58:08.008773 1253594 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0308 02:58:08.008780 1253594 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0308 02:58:19.531438 1253594 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0308 02:58:19.531487 1253594 cache.go:194] Successfully downloaded all kic artifacts
	I0308 02:58:19.531522 1253594 start.go:360] acquireMachinesLock for addons-096357: {Name:mk08648cdaca399025e8f1d58c6c633983097f69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 02:58:19.531617 1253594 start.go:364] duration metric: took 74.055µs to acquireMachinesLock for "addons-096357"
	I0308 02:58:19.531641 1253594 start.go:93] Provisioning new machine with config: &{Name:addons-096357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 02:58:19.531716 1253594 start.go:125] createHost starting for "" (driver="docker")
	I0308 02:58:19.533434 1253594 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0308 02:58:19.533704 1253594 start.go:159] libmachine.API.Create for "addons-096357" (driver="docker")
	I0308 02:58:19.533740 1253594 client.go:168] LocalClient.Create starting
	I0308 02:58:19.533835 1253594 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem
	I0308 02:58:19.817295 1253594 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/cert.pem
	I0308 02:58:19.882819 1253594 cli_runner.go:164] Run: docker network inspect addons-096357 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0308 02:58:19.898740 1253594 cli_runner.go:211] docker network inspect addons-096357 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0308 02:58:19.898824 1253594 network_create.go:281] running [docker network inspect addons-096357] to gather additional debugging logs...
	I0308 02:58:19.898845 1253594 cli_runner.go:164] Run: docker network inspect addons-096357
	W0308 02:58:19.913565 1253594 cli_runner.go:211] docker network inspect addons-096357 returned with exit code 1
	I0308 02:58:19.913620 1253594 network_create.go:284] error running [docker network inspect addons-096357]: docker network inspect addons-096357: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-096357 not found
	I0308 02:58:19.913635 1253594 network_create.go:286] output of [docker network inspect addons-096357]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-096357 not found
	
	** /stderr **
	I0308 02:58:19.913720 1253594 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0308 02:58:19.929171 1253594 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002737420}
	I0308 02:58:19.929226 1253594 network_create.go:124] attempt to create docker network addons-096357 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0308 02:58:19.929273 1253594 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-096357 addons-096357
	I0308 02:58:19.982091 1253594 network_create.go:108] docker network addons-096357 192.168.49.0/24 created
	I0308 02:58:19.982124 1253594 kic.go:121] calculated static IP "192.168.49.2" for the "addons-096357" container
	I0308 02:58:19.982194 1253594 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0308 02:58:19.996917 1253594 cli_runner.go:164] Run: docker volume create addons-096357 --label name.minikube.sigs.k8s.io=addons-096357 --label created_by.minikube.sigs.k8s.io=true
	I0308 02:58:20.013747 1253594 oci.go:103] Successfully created a docker volume addons-096357
	I0308 02:58:20.013839 1253594 cli_runner.go:164] Run: docker run --rm --name addons-096357-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-096357 --entrypoint /usr/bin/test -v addons-096357:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0308 02:58:24.838270 1253594 cli_runner.go:217] Completed: docker run --rm --name addons-096357-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-096357 --entrypoint /usr/bin/test -v addons-096357:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (4.824381579s)
	I0308 02:58:24.838314 1253594 oci.go:107] Successfully prepared a docker volume addons-096357
	I0308 02:58:24.838344 1253594 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:58:24.838369 1253594 kic.go:194] Starting extracting preloaded images to volume ...
	I0308 02:58:24.838435 1253594 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-096357:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0308 02:58:29.932305 1253594 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-096357:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (5.093827169s)
	I0308 02:58:29.932341 1253594 kic.go:203] duration metric: took 5.093967368s to extract preloaded images to volume ...
	W0308 02:58:29.932497 1253594 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0308 02:58:29.932686 1253594 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0308 02:58:29.979931 1253594 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-096357 --name addons-096357 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-096357 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-096357 --network addons-096357 --ip 192.168.49.2 --volume addons-096357:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0308 02:58:30.270716 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Running}}
	I0308 02:58:30.286272 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:30.303244 1253594 cli_runner.go:164] Run: docker exec addons-096357 stat /var/lib/dpkg/alternatives/iptables
	I0308 02:58:30.341325 1253594 oci.go:144] the created container "addons-096357" has a running status.
	I0308 02:58:30.341365 1253594 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa...
	I0308 02:58:30.579441 1253594 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0308 02:58:30.602870 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:30.617972 1253594 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0308 02:58:30.617997 1253594 kic_runner.go:114] Args: [docker exec --privileged addons-096357 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0308 02:58:30.663812 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:30.683727 1253594 machine.go:94] provisionDockerMachine start ...
	I0308 02:58:30.683860 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:30.699532 1253594 main.go:141] libmachine: Using SSH client type: native
	I0308 02:58:30.699745 1253594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0308 02:58:30.699758 1253594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 02:58:30.892825 1253594 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-096357
	
	I0308 02:58:30.892875 1253594 ubuntu.go:169] provisioning hostname "addons-096357"
	I0308 02:58:30.892934 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:30.909427 1253594 main.go:141] libmachine: Using SSH client type: native
	I0308 02:58:30.909635 1253594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0308 02:58:30.909656 1253594 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-096357 && echo "addons-096357" | sudo tee /etc/hostname
	I0308 02:58:31.031564 1253594 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-096357
	
	I0308 02:58:31.031655 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.047824 1253594 main.go:141] libmachine: Using SSH client type: native
	I0308 02:58:31.048016 1253594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0308 02:58:31.048032 1253594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-096357' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-096357/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-096357' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 02:58:31.161290 1253594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 02:58:31.161322 1253594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18333-1245188/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-1245188/.minikube}
	I0308 02:58:31.161349 1253594 ubuntu.go:177] setting up certificates
	I0308 02:58:31.161364 1253594 provision.go:84] configureAuth start
	I0308 02:58:31.161429 1253594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-096357
	I0308 02:58:31.176872 1253594 provision.go:143] copyHostCerts
	I0308 02:58:31.176945 1253594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.pem (1082 bytes)
	I0308 02:58:31.177055 1253594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-1245188/.minikube/cert.pem (1123 bytes)
	I0308 02:58:31.177105 1253594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-1245188/.minikube/key.pem (1679 bytes)
	I0308 02:58:31.177150 1253594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca-key.pem org=jenkins.addons-096357 san=[127.0.0.1 192.168.49.2 addons-096357 localhost minikube]
	I0308 02:58:31.383138 1253594 provision.go:177] copyRemoteCerts
	I0308 02:58:31.383203 1253594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 02:58:31.383238 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.399533 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:31.486393 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 02:58:31.508751 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 02:58:31.529939 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 02:58:31.550315 1253594 provision.go:87] duration metric: took 388.932588ms to configureAuth
	I0308 02:58:31.550344 1253594 ubuntu.go:193] setting minikube options for container-runtime
	I0308 02:58:31.550531 1253594 config.go:182] Loaded profile config "addons-096357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 02:58:31.550683 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.566020 1253594 main.go:141] libmachine: Using SSH client type: native
	I0308 02:58:31.566194 1253594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0308 02:58:31.566223 1253594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 02:58:31.765155 1253594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 02:58:31.765190 1253594 machine.go:97] duration metric: took 1.081433089s to provisionDockerMachine
	I0308 02:58:31.765200 1253594 client.go:171] duration metric: took 12.231452342s to LocalClient.Create
	I0308 02:58:31.765217 1253594 start.go:167] duration metric: took 12.231516885s to libmachine.API.Create "addons-096357"
	I0308 02:58:31.765224 1253594 start.go:293] postStartSetup for "addons-096357" (driver="docker")
	I0308 02:58:31.765234 1253594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 02:58:31.765295 1253594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 02:58:31.765326 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.781486 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:31.865864 1253594 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 02:58:31.868714 1253594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0308 02:58:31.868742 1253594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0308 02:58:31.868749 1253594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0308 02:58:31.868756 1253594 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0308 02:58:31.868768 1253594 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-1245188/.minikube/addons for local assets ...
	I0308 02:58:31.868820 1253594 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-1245188/.minikube/files for local assets ...
	I0308 02:58:31.868847 1253594 start.go:296] duration metric: took 103.617168ms for postStartSetup
	I0308 02:58:31.869090 1253594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-096357
	I0308 02:58:31.884969 1253594 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/config.json ...
	I0308 02:58:31.885240 1253594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 02:58:31.885298 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.901053 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:31.982312 1253594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0308 02:58:31.986472 1253594 start.go:128] duration metric: took 12.454740412s to createHost
	I0308 02:58:31.986501 1253594 start.go:83] releasing machines lock for "addons-096357", held for 12.454872218s
	I0308 02:58:31.986561 1253594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-096357
	I0308 02:58:32.001898 1253594 ssh_runner.go:195] Run: cat /version.json
	I0308 02:58:32.001938 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:32.002019 1253594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 02:58:32.002096 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:32.017726 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:32.018677 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:32.096906 1253594 ssh_runner.go:195] Run: systemctl --version
	I0308 02:58:32.165511 1253594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 02:58:32.303254 1253594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 02:58:32.307603 1253594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 02:58:32.325885 1253594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0308 02:58:32.325966 1253594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 02:58:32.352656 1253594 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0308 02:58:32.352682 1253594 start.go:494] detecting cgroup driver to use...
	I0308 02:58:32.352719 1253594 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0308 02:58:32.352770 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 02:58:32.366492 1253594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 02:58:32.376307 1253594 docker.go:217] disabling cri-docker service (if available) ...
	I0308 02:58:32.376374 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 02:58:32.388296 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 02:58:32.400492 1253594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 02:58:32.473656 1253594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 02:58:32.549609 1253594 docker.go:233] disabling docker service ...
	I0308 02:58:32.549676 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 02:58:32.568231 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 02:58:32.578916 1253594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 02:58:32.649628 1253594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 02:58:32.725504 1253594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 02:58:32.735403 1253594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 02:58:32.749227 1253594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 02:58:32.749277 1253594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:58:32.757924 1253594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 02:58:32.757995 1253594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:58:32.766615 1253594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:58:32.775257 1253594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:58:32.783821 1253594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 02:58:32.791830 1253594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 02:58:32.799220 1253594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 02:58:32.806506 1253594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 02:58:32.884645 1253594 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 02:58:32.994138 1253594 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 02:58:32.994234 1253594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 02:58:32.997562 1253594 start.go:562] Will wait 60s for crictl version
	I0308 02:58:32.997619 1253594 ssh_runner.go:195] Run: which crictl
	I0308 02:58:33.000631 1253594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 02:58:33.033339 1253594 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0308 02:58:33.033412 1253594 ssh_runner.go:195] Run: crio --version
	I0308 02:58:33.067476 1253594 ssh_runner.go:195] Run: crio --version
	I0308 02:58:33.100850 1253594 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0308 02:58:33.102328 1253594 cli_runner.go:164] Run: docker network inspect addons-096357 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0308 02:58:33.117786 1253594 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0308 02:58:33.121438 1253594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 02:58:33.131414 1253594 kubeadm.go:877] updating cluster {Name:addons-096357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 02:58:33.131568 1253594 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:58:33.131634 1253594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 02:58:33.186896 1253594 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 02:58:33.186920 1253594 crio.go:415] Images already preloaded, skipping extraction
	I0308 02:58:33.186962 1253594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 02:58:33.220288 1253594 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 02:58:33.220313 1253594 cache_images.go:84] Images are preloaded, skipping loading
	I0308 02:58:33.220321 1253594 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 crio true true} ...
	I0308 02:58:33.220424 1253594 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-096357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 02:58:33.220507 1253594 ssh_runner.go:195] Run: crio config
	I0308 02:58:33.261219 1253594 cni.go:84] Creating CNI manager for ""
	I0308 02:58:33.261244 1253594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0308 02:58:33.261259 1253594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 02:58:33.261283 1253594 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-096357 NodeName:addons-096357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 02:58:33.261444 1253594 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-096357"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 02:58:33.261509 1253594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 02:58:33.269735 1253594 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 02:58:33.269793 1253594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 02:58:33.277477 1253594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0308 02:58:33.292814 1253594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 02:58:33.308089 1253594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0308 02:58:33.323226 1253594 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0308 02:58:33.326350 1253594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 02:58:33.336647 1253594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 02:58:33.407007 1253594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 02:58:33.418978 1253594 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357 for IP: 192.168.49.2
	I0308 02:58:33.419008 1253594 certs.go:194] generating shared ca certs ...
	I0308 02:58:33.419032 1253594 certs.go:226] acquiring lock for ca certs: {Name:mkab513412908ef55b41438557e8ea33978e0150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.419157 1253594 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.key
	I0308 02:58:33.764377 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.crt ...
	I0308 02:58:33.764414 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.crt: {Name:mka3abcc00eaaf2abc6c06778272723fa7615945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.764586 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.key ...
	I0308 02:58:33.764598 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.key: {Name:mk2d876913085796b5af769962cc6e24b1f16b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.764666 1253594 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.key
	I0308 02:58:33.836453 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.crt ...
	I0308 02:58:33.836482 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.crt: {Name:mk84288a8eaed2375f49d6c0702b43bd4f5c08ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.836629 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.key ...
	I0308 02:58:33.836640 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.key: {Name:mk18a23a1ec68618d9d32fb1cb6aa4af87e06bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.836705 1253594 certs.go:256] generating profile certs ...
	I0308 02:58:33.836769 1253594 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.key
	I0308 02:58:33.836789 1253594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt with IP's: []
	I0308 02:58:34.103290 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt ...
	I0308 02:58:34.103326 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: {Name:mk540ccba3f8251447117c2919f6bba9c6c31dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.103493 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.key ...
	I0308 02:58:34.103503 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.key: {Name:mk937b0016eecc00549e496706148abe75501d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.103568 1253594 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key.89614b59
	I0308 02:58:34.103588 1253594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt.89614b59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0308 02:58:34.162869 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt.89614b59 ...
	I0308 02:58:34.162900 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt.89614b59: {Name:mk41617eb99b5efb56b48af803be0936320366cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.163041 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key.89614b59 ...
	I0308 02:58:34.163054 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key.89614b59: {Name:mkc81ffeb0f280669973fcf28eda5e7e70cc6351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.163121 1253594 certs.go:381] copying /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt.89614b59 -> /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt
	I0308 02:58:34.163212 1253594 certs.go:385] copying /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key.89614b59 -> /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key
	I0308 02:58:34.163264 1253594 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.key
	I0308 02:58:34.163283 1253594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.crt with IP's: []
	I0308 02:58:34.295369 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.crt ...
	I0308 02:58:34.295405 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.crt: {Name:mk3787e49d2d9f91f7f45d0abf562d6918d02c77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.295571 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.key ...
	I0308 02:58:34.295585 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.key: {Name:mk4348ae0f9fa4fe8d97588fc81878bf07ec4239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.295754 1253594 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 02:58:34.295800 1253594 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem (1082 bytes)
	I0308 02:58:34.295828 1253594 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/cert.pem (1123 bytes)
	I0308 02:58:34.295852 1253594 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/key.pem (1679 bytes)
	I0308 02:58:34.296475 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 02:58:34.319215 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 02:58:34.340125 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 02:58:34.361144 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 02:58:34.382161 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 02:58:34.402963 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 02:58:34.423356 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 02:58:34.443559 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 02:58:34.463629 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 02:58:34.483538 1253594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 02:58:34.498359 1253594 ssh_runner.go:195] Run: openssl version
	I0308 02:58:34.503073 1253594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 02:58:34.510964 1253594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 02:58:34.513997 1253594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:58 /usr/share/ca-certificates/minikubeCA.pem
	I0308 02:58:34.514047 1253594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 02:58:34.520066 1253594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 02:58:34.527966 1253594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 02:58:34.530816 1253594 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 02:58:34.530895 1253594 kubeadm.go:391] StartCluster: {Name:addons-096357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:58:34.531016 1253594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 02:58:34.531058 1253594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 02:58:34.563954 1253594 cri.go:89] found id: ""
	I0308 02:58:34.564043 1253594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 02:58:34.572382 1253594 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 02:58:34.580231 1253594 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0308 02:58:34.580285 1253594 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 02:58:34.587874 1253594 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 02:58:34.587892 1253594 kubeadm.go:156] found existing configuration files:
	
	I0308 02:58:34.587939 1253594 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 02:58:34.595132 1253594 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 02:58:34.595179 1253594 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 02:58:34.602351 1253594 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 02:58:34.609545 1253594 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 02:58:34.609651 1253594 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 02:58:34.616510 1253594 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 02:58:34.623708 1253594 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 02:58:34.623745 1253594 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 02:58:34.630647 1253594 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 02:58:34.637675 1253594 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 02:58:34.637724 1253594 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 02:58:34.644523 1253594 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0308 02:58:34.683028 1253594 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 02:58:34.683140 1253594 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 02:58:34.717300 1253594 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0308 02:58:34.717364 1253594 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1053-gcp
	I0308 02:58:34.717392 1253594 kubeadm.go:309] OS: Linux
	I0308 02:58:34.717431 1253594 kubeadm.go:309] CGROUPS_CPU: enabled
	I0308 02:58:34.717519 1253594 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0308 02:58:34.717612 1253594 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0308 02:58:34.717662 1253594 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0308 02:58:34.717711 1253594 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0308 02:58:34.717752 1253594 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0308 02:58:34.717833 1253594 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0308 02:58:34.717911 1253594 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0308 02:58:34.717980 1253594 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0308 02:58:34.776002 1253594 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 02:58:34.776152 1253594 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 02:58:34.776260 1253594 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 02:58:34.970445 1253594 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 02:58:34.973705 1253594 out.go:204]   - Generating certificates and keys ...
	I0308 02:58:34.973810 1253594 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 02:58:34.973887 1253594 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 02:58:35.139783 1253594 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 02:58:35.231744 1253594 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 02:58:35.434839 1253594 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 02:58:35.485293 1253594 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 02:58:35.660888 1253594 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 02:58:35.661038 1253594 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-096357 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0308 02:58:35.923158 1253594 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 02:58:35.923299 1253594 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-096357 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0308 02:58:36.220423 1253594 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 02:58:36.334641 1253594 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 02:58:36.664870 1253594 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 02:58:36.664963 1253594 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 02:58:36.760600 1253594 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 02:58:37.098056 1253594 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 02:58:37.197441 1253594 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 02:58:37.411928 1253594 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 02:58:37.412397 1253594 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 02:58:37.414705 1253594 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 02:58:37.417485 1253594 out.go:204]   - Booting up control plane ...
	I0308 02:58:37.417579 1253594 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 02:58:37.417691 1253594 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 02:58:37.417798 1253594 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 02:58:37.425693 1253594 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 02:58:37.426593 1253594 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 02:58:37.426633 1253594 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 02:58:37.507097 1253594 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 02:58:42.009468 1253594 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502414 seconds
	I0308 02:58:42.009673 1253594 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 02:58:42.020211 1253594 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 02:58:42.539432 1253594 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 02:58:42.539708 1253594 kubeadm.go:309] [mark-control-plane] Marking the node addons-096357 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 02:58:43.048435 1253594 kubeadm.go:309] [bootstrap-token] Using token: r50gpi.1afv8oc1kcg79288
	I0308 02:58:43.049984 1253594 out.go:204]   - Configuring RBAC rules ...
	I0308 02:58:43.050146 1253594 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 02:58:43.054368 1253594 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 02:58:43.072256 1253594 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 02:58:43.074854 1253594 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 02:58:43.077374 1253594 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 02:58:43.080695 1253594 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 02:58:43.089951 1253594 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 02:58:43.285931 1253594 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 02:58:43.458398 1253594 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 02:58:43.459328 1253594 kubeadm.go:309] 
	I0308 02:58:43.459466 1253594 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 02:58:43.459487 1253594 kubeadm.go:309] 
	I0308 02:58:43.459589 1253594 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 02:58:43.459599 1253594 kubeadm.go:309] 
	I0308 02:58:43.459631 1253594 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 02:58:43.459725 1253594 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 02:58:43.459830 1253594 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 02:58:43.459853 1253594 kubeadm.go:309] 
	I0308 02:58:43.459929 1253594 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 02:58:43.459939 1253594 kubeadm.go:309] 
	I0308 02:58:43.459998 1253594 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 02:58:43.460028 1253594 kubeadm.go:309] 
	I0308 02:58:43.460126 1253594 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 02:58:43.460237 1253594 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 02:58:43.460346 1253594 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 02:58:43.460361 1253594 kubeadm.go:309] 
	I0308 02:58:43.460460 1253594 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 02:58:43.460555 1253594 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 02:58:43.460564 1253594 kubeadm.go:309] 
	I0308 02:58:43.460653 1253594 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r50gpi.1afv8oc1kcg79288 \
	I0308 02:58:43.460783 1253594 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1cff8f068d2bc9c711e0cbd73acfe61141d16836e3de4386ac9d96e369e769fb \
	I0308 02:58:43.460826 1253594 kubeadm.go:309] 	--control-plane 
	I0308 02:58:43.460844 1253594 kubeadm.go:309] 
	I0308 02:58:43.460967 1253594 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 02:58:43.460977 1253594 kubeadm.go:309] 
	I0308 02:58:43.461071 1253594 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r50gpi.1afv8oc1kcg79288 \
	I0308 02:58:43.461194 1253594 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1cff8f068d2bc9c711e0cbd73acfe61141d16836e3de4386ac9d96e369e769fb 
	I0308 02:58:43.462993 1253594 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-gcp\n", err: exit status 1
	I0308 02:58:43.463095 1253594 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 02:58:43.463134 1253594 cni.go:84] Creating CNI manager for ""
	I0308 02:58:43.463156 1253594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0308 02:58:43.464944 1253594 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0308 02:58:43.466304 1253594 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0308 02:58:43.470695 1253594 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 02:58:43.470716 1253594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0308 02:58:43.488435 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 02:58:44.219423 1253594 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 02:58:44.219499 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:44.219515 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-096357 minikube.k8s.io/updated_at=2024_03_08T02_58_44_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=addons-096357 minikube.k8s.io/primary=true
	I0308 02:58:44.284823 1253594 ops.go:34] apiserver oom_adj: -16
	I0308 02:58:44.284902 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:44.785189 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:45.285616 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:45.784917 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:46.285240 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:46.785423 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:47.285364 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:47.785786 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:48.285892 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:48.785032 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:49.285431 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:49.784958 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:50.285031 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:50.785687 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:51.285634 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:51.785141 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:52.285471 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:52.785147 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:53.285630 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:53.785295 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:54.285801 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:54.785060 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:55.285647 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:55.785680 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:56.284959 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:56.352249 1253594 kubeadm.go:1106] duration metric: took 12.132802552s to wait for elevateKubeSystemPrivileges
	W0308 02:58:56.352299 1253594 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 02:58:56.352309 1253594 kubeadm.go:393] duration metric: took 21.821420973s to StartCluster
	I0308 02:58:56.352333 1253594 settings.go:142] acquiring lock: {Name:mke0ce76fc205916bb79eabaf8ed113e38eddf4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:56.352467 1253594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 02:58:56.353035 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/kubeconfig: {Name:mk98e1f656e06fac7ff6c69fb4148cf4fd3984bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:56.353289 1253594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 02:58:56.353359 1253594 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 02:58:56.354629 1253594 out.go:177] * Verifying Kubernetes components...
	I0308 02:58:56.353429 1253594 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0308 02:58:56.353550 1253594 config.go:182] Loaded profile config "addons-096357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 02:58:56.356164 1253594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 02:58:56.356184 1253594 addons.go:69] Setting yakd=true in profile "addons-096357"
	I0308 02:58:56.356222 1253594 addons.go:234] Setting addon yakd=true in "addons-096357"
	I0308 02:58:56.356247 1253594 addons.go:69] Setting ingress-dns=true in profile "addons-096357"
	I0308 02:58:56.356262 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.356292 1253594 addons.go:234] Setting addon ingress-dns=true in "addons-096357"
	I0308 02:58:56.356342 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.356608 1253594 addons.go:69] Setting default-storageclass=true in profile "addons-096357"
	I0308 02:58:56.356647 1253594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-096357"
	I0308 02:58:56.356856 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.356871 1253594 addons.go:69] Setting gcp-auth=true in profile "addons-096357"
	I0308 02:58:56.356894 1253594 mustload.go:65] Loading cluster: addons-096357
	I0308 02:58:56.356920 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.357088 1253594 config.go:182] Loaded profile config "addons-096357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 02:58:56.357418 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.357646 1253594 addons.go:69] Setting registry=true in profile "addons-096357"
	I0308 02:58:56.357647 1253594 addons.go:69] Setting helm-tiller=true in profile "addons-096357"
	I0308 02:58:56.357687 1253594 addons.go:234] Setting addon registry=true in "addons-096357"
	I0308 02:58:56.357688 1253594 addons.go:234] Setting addon helm-tiller=true in "addons-096357"
	I0308 02:58:56.357687 1253594 addons.go:69] Setting metrics-server=true in profile "addons-096357"
	I0308 02:58:56.357719 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.357719 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.357728 1253594 addons.go:234] Setting addon metrics-server=true in "addons-096357"
	I0308 02:58:56.357764 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.358169 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.358180 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.358231 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.358421 1253594 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-096357"
	I0308 02:58:56.358464 1253594 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-096357"
	I0308 02:58:56.358504 1253594 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-096357"
	I0308 02:58:56.358598 1253594 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-096357"
	I0308 02:58:56.358642 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.358724 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.359125 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.359736 1253594 addons.go:69] Setting storage-provisioner=true in profile "addons-096357"
	I0308 02:58:56.359778 1253594 addons.go:234] Setting addon storage-provisioner=true in "addons-096357"
	I0308 02:58:56.359809 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.360310 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.356863 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.361203 1253594 addons.go:69] Setting ingress=true in profile "addons-096357"
	I0308 02:58:56.362159 1253594 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-096357"
	I0308 02:58:56.362194 1253594 addons.go:69] Setting cloud-spanner=true in profile "addons-096357"
	I0308 02:58:56.369819 1253594 addons.go:234] Setting addon cloud-spanner=true in "addons-096357"
	I0308 02:58:56.369901 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.370416 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.370549 1253594 addons.go:234] Setting addon ingress=true in "addons-096357"
	I0308 02:58:56.370673 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.371191 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.362214 1253594 addons.go:69] Setting inspektor-gadget=true in profile "addons-096357"
	I0308 02:58:56.372101 1253594 addons.go:234] Setting addon inspektor-gadget=true in "addons-096357"
	I0308 02:58:56.372155 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.362229 1253594 addons.go:69] Setting volumesnapshots=true in profile "addons-096357"
	I0308 02:58:56.372278 1253594 addons.go:234] Setting addon volumesnapshots=true in "addons-096357"
	I0308 02:58:56.372365 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.372626 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.372899 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.380005 1253594 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-096357"
	I0308 02:58:56.384583 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.385294 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.397248 1253594 addons.go:234] Setting addon default-storageclass=true in "addons-096357"
	I0308 02:58:56.397310 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.397824 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.400705 1253594 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0308 02:58:56.402252 1253594 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0308 02:58:56.402277 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0308 02:58:56.402473 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.404351 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0308 02:58:56.405577 1253594 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0308 02:58:56.406826 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0308 02:58:56.406849 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0308 02:58:56.406909 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.408301 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0308 02:58:56.409376 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0308 02:58:56.410921 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0308 02:58:56.412907 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0308 02:58:56.414764 1253594 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0308 02:58:56.416472 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0308 02:58:56.416416 1253594 out.go:177]   - Using image docker.io/registry:2.8.3
	I0308 02:58:56.417909 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0308 02:58:56.421281 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0308 02:58:56.422681 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0308 02:58:56.422704 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0308 02:58:56.422775 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.421678 1253594 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0308 02:58:56.423020 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0308 02:58:56.423075 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.421753 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.425339 1253594 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0308 02:58:56.426635 1253594 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 02:58:56.426655 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 02:58:56.426707 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.434229 1253594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 02:58:56.429011 1253594 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-096357"
	I0308 02:58:56.436311 1253594 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 02:58:56.436417 1253594 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0308 02:58:56.436464 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.438159 1253594 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0308 02:58:56.441523 1253594 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0308 02:58:56.441542 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0308 02:58:56.444117 1253594 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0308 02:58:56.444138 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0308 02:58:56.444187 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.439531 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 02:58:56.444258 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.439543 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0308 02:58:56.440024 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.441750 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.445657 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0308 02:58:56.445676 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0308 02:58:56.445732 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.446487 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.454901 1253594 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0308 02:58:56.452832 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.459507 1253594 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0308 02:58:56.461217 1253594 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0308 02:58:56.462796 1253594 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0308 02:58:56.462826 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0308 02:58:56.462893 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.486197 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.486285 1253594 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 02:58:56.486308 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 02:58:56.486363 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.487023 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.493893 1253594 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0308 02:58:56.495333 1253594 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0308 02:58:56.496604 1253594 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0308 02:58:56.496619 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0308 02:58:56.496670 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.498008 1253594 out.go:177]   - Using image docker.io/busybox:stable
	I0308 02:58:56.497826 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.499737 1253594 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0308 02:58:56.499761 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0308 02:58:56.499819 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.508225 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.509641 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.512953 1253594 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0308 02:58:56.511928 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.513744 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.514495 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0308 02:58:56.514510 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0308 02:58:56.514565 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.519733 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.521087 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.522689 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.525946 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.531614 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	W0308 02:58:56.541806 1253594 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0308 02:58:56.541841 1253594 retry.go:31] will retry after 323.578107ms: ssh: handshake failed: EOF
	I0308 02:58:56.758401 1253594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 02:58:56.946182 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0308 02:58:56.952307 1253594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 02:58:57.035227 1253594 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0308 02:58:57.035266 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0308 02:58:57.036576 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0308 02:58:57.040520 1253594 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 02:58:57.040550 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0308 02:58:57.042575 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0308 02:58:57.042598 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0308 02:58:57.055668 1253594 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0308 02:58:57.055704 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0308 02:58:57.135860 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0308 02:58:57.136789 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0308 02:58:57.150346 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 02:58:57.241752 1253594 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0308 02:58:57.241790 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0308 02:58:57.242115 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 02:58:57.246161 1253594 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 02:58:57.246191 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 02:58:57.252799 1253594 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0308 02:58:57.252838 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0308 02:58:57.256559 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0308 02:58:57.256585 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0308 02:58:57.336339 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0308 02:58:57.336371 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0308 02:58:57.336679 1253594 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0308 02:58:57.336707 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0308 02:58:57.343923 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0308 02:58:57.343954 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0308 02:58:57.544702 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0308 02:58:57.552246 1253594 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 02:58:57.552296 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 02:58:57.635254 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0308 02:58:57.646145 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0308 02:58:57.646201 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0308 02:58:57.647795 1253594 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0308 02:58:57.647835 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0308 02:58:57.737005 1253594 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0308 02:58:57.737097 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0308 02:58:57.742680 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0308 02:58:57.742763 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0308 02:58:57.846563 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 02:58:57.848831 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0308 02:58:57.848860 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0308 02:58:57.936241 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0308 02:58:57.936277 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0308 02:58:57.936471 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0308 02:58:57.936489 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0308 02:58:58.142166 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0308 02:58:58.142202 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0308 02:58:58.153077 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0308 02:58:58.153108 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0308 02:58:58.335362 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0308 02:58:58.335398 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0308 02:58:58.338706 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0308 02:58:58.343019 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0308 02:58:58.538295 1253594 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0308 02:58:58.538328 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0308 02:58:58.555214 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0308 02:58:58.555253 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0308 02:58:58.734334 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0308 02:58:58.734425 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0308 02:58:58.854642 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0308 02:58:58.854675 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0308 02:58:59.038041 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0308 02:58:59.039349 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0308 02:58:59.039378 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0308 02:58:59.240225 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0308 02:58:59.240264 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0308 02:58:59.335819 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0308 02:58:59.335923 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0308 02:58:59.343378 1253594 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.584913952s)
	I0308 02:58:59.343489 1253594 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0308 02:58:59.555123 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0308 02:58:59.555219 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0308 02:58:59.752243 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0308 02:58:59.841841 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0308 02:58:59.841941 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0308 02:59:00.054948 1253594 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-096357" context rescaled to 1 replicas
	I0308 02:59:00.238060 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0308 02:59:00.238092 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0308 02:59:00.549322 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.603098859s)
	I0308 02:59:00.549244 1253594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.596893419s)
	I0308 02:59:00.550646 1253594 node_ready.go:35] waiting up to 6m0s for node "addons-096357" to be "Ready" ...
	I0308 02:59:00.652490 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0308 02:59:00.652588 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0308 02:59:01.152117 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0308 02:59:02.643845 1253594 node_ready.go:53] node "addons-096357" has status "Ready":"False"
	I0308 02:59:03.146594 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.109971893s)
	I0308 02:59:03.146864 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.010424341s)
	I0308 02:59:03.245769 1253594 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0308 02:59:03.245933 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:59:03.267540 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:59:03.742737 1253594 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0308 02:59:03.849102 1253594 addons.go:234] Setting addon gcp-auth=true in "addons-096357"
	I0308 02:59:03.849174 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:59:03.849753 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:59:03.868818 1253594 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0308 02:59:03.868871 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:59:03.888759 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:59:04.342520 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.205684212s)
	I0308 02:59:04.342563 1253594 addons.go:470] Verifying addon ingress=true in "addons-096357"
	I0308 02:59:04.342561 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.192170751s)
	I0308 02:59:04.342620 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.100460642s)
	I0308 02:59:04.344031 1253594 out.go:177] * Verifying ingress addon...
	I0308 02:59:04.342667 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.797887363s)
	I0308 02:59:04.342712 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.707424475s)
	I0308 02:59:04.342792 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.496192304s)
	I0308 02:59:04.342873 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.004121014s)
	I0308 02:59:04.342911 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.99984952s)
	I0308 02:59:04.343007 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.304928048s)
	I0308 02:59:04.343163 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.590801598s)
	I0308 02:59:04.345237 1253594 addons.go:470] Verifying addon registry=true in "addons-096357"
	I0308 02:59:04.346457 1253594 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-096357 service yakd-dashboard -n yakd-dashboard
	
	I0308 02:59:04.345346 1253594 addons.go:470] Verifying addon metrics-server=true in "addons-096357"
	W0308 02:59:04.345384 1253594 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0308 02:59:04.346233 1253594 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0308 02:59:04.347710 1253594 out.go:177] * Verifying registry addon...
	I0308 02:59:04.347844 1253594 retry.go:31] will retry after 359.745349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
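The failure above is the classic CRD registration race: the `VolumeSnapshotClass` custom resource is applied in the same `kubectl apply` batch that creates its `snapshot.storage.k8s.io` CRDs, and the API server has not yet registered the new kind, hence "no matches for kind ... ensure CRDs are installed first". minikube recovers by retrying the whole batch with `--force` (visible just below at `02:59:04.709495`). One mitigation is to order CRD manifests ahead of the custom resources that depend on them; a minimal sketch of that partitioning, assuming CRD manifests are identifiable by the `snapshot.storage.k8s.io_` filename prefix used in this log (the function name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// crdsFirst reorders manifest paths so CRD manifests come before the
// custom resources that depend on them. Identification by filename prefix
// is an assumption specific to the snapshot addon's naming convention;
// a robust version would parse each manifest's kind. Note that ordering
// alone does not close the race entirely — the CRDs must also reach the
// Established condition before dependent resources are applied, which is
// why a retry (as in the log above) is still needed.
func crdsFirst(manifests []string) []string {
	var crds, rest []string
	for _, m := range manifests {
		if strings.Contains(m, "snapshot.storage.k8s.io_") {
			crds = append(crds, m)
		} else {
			rest = append(rest, m)
		}
	}
	return append(crds, rest...)
}

func main() {
	ordered := crdsFirst([]string{
		"csi-hostpath-snapshotclass.yaml",
		"snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"volume-snapshot-controller-deployment.yaml",
	})
	fmt.Println(ordered)
}
```

With kubectl directly, the equivalent remedy is a two-phase apply: install the CRDs, `kubectl wait --for=condition=established` on them, then apply the `VolumeSnapshotClass` and controller manifests.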
	I0308 02:59:04.350002 1253594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0308 02:59:04.353518 1253594 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0308 02:59:04.353546 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:04.353851 1253594 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0308 02:59:04.353871 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:04.709495 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0308 02:59:04.851190 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:04.853194 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:05.053623 1253594 node_ready.go:53] node "addons-096357" has status "Ready":"False"
	I0308 02:59:05.178437 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.02620209s)
	I0308 02:59:05.178505 1253594 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.309650717s)
	I0308 02:59:05.178528 1253594 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-096357"
	I0308 02:59:05.180109 1253594 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0308 02:59:05.181629 1253594 out.go:177] * Verifying csi-hostpath-driver addon...
	I0308 02:59:05.182811 1253594 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0308 02:59:05.183917 1253594 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0308 02:59:05.183936 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0308 02:59:05.183386 1253594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0308 02:59:05.240511 1253594 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0308 02:59:05.240536 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:05.254541 1253594 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0308 02:59:05.254566 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0308 02:59:05.272370 1253594 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0308 02:59:05.272397 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0308 02:59:05.288558 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0308 02:59:05.353011 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:05.354836 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:05.740689 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:05.854808 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:05.855311 1253594 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0308 02:59:05.855383 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:06.055894 1253594 node_ready.go:49] node "addons-096357" has status "Ready":"True"
	I0308 02:59:06.055987 1253594 node_ready.go:38] duration metric: took 5.505259164s for node "addons-096357" to be "Ready" ...
	I0308 02:59:06.056005 1253594 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 02:59:06.064816 1253594 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gfwfq" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:06.248505 1253594 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0308 02:59:06.248546 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:06.352461 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:06.355857 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:06.665035 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.955485681s)
	I0308 02:59:06.739911 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:06.854847 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:06.856378 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.567778167s)
	I0308 02:59:06.857453 1253594 addons.go:470] Verifying addon gcp-auth=true in "addons-096357"
	I0308 02:59:06.859058 1253594 out.go:177] * Verifying gcp-auth addon...
	I0308 02:59:06.860349 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:06.861436 1253594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0308 02:59:06.935086 1253594 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0308 02:59:06.935128 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:07.243678 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:07.436035 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:07.438350 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:07.438730 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:07.739276 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:07.852823 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:07.855981 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:07.864819 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:08.140991 1253594 pod_ready.go:102] pod "coredns-5dd5756b68-gfwfq" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:08.238729 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:08.353405 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:08.356932 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:08.435981 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:08.739230 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:08.852720 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:08.854868 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:08.864779 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:09.071062 1253594 pod_ready.go:92] pod "coredns-5dd5756b68-gfwfq" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.071107 1253594 pod_ready.go:81] duration metric: took 3.00626175s for pod "coredns-5dd5756b68-gfwfq" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.071140 1253594 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.075992 1253594 pod_ready.go:92] pod "etcd-addons-096357" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.076018 1253594 pod_ready.go:81] duration metric: took 4.856994ms for pod "etcd-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.076034 1253594 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.138728 1253594 pod_ready.go:92] pod "kube-apiserver-addons-096357" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.138762 1253594 pod_ready.go:81] duration metric: took 62.718253ms for pod "kube-apiserver-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.138778 1253594 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.144575 1253594 pod_ready.go:92] pod "kube-controller-manager-addons-096357" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.144605 1253594 pod_ready.go:81] duration metric: took 5.81623ms for pod "kube-controller-manager-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.144620 1253594 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9q92q" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.149876 1253594 pod_ready.go:92] pod "kube-proxy-9q92q" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.149895 1253594 pod_ready.go:81] duration metric: took 5.268604ms for pod "kube-proxy-9q92q" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.149904 1253594 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.239860 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:09.352758 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:09.355223 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:09.365520 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:09.468393 1253594 pod_ready.go:92] pod "kube-scheduler-addons-096357" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.468425 1253594 pod_ready.go:81] duration metric: took 318.513376ms for pod "kube-scheduler-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.468439 1253594 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.739549 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:09.853069 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:09.855460 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:09.864950 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:10.190177 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:10.351928 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:10.354537 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:10.364910 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:10.690095 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:10.852473 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:10.854221 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:10.864190 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:11.189534 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:11.353055 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:11.355378 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:11.364502 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:11.474666 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:11.689638 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:11.851980 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:11.854210 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:11.864220 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:12.188753 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:12.351568 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:12.353745 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:12.364938 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:12.689642 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:12.851158 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:12.853871 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:12.864639 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:13.191768 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:13.351759 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:13.354046 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:13.363804 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:13.688923 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:13.852164 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:13.854071 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:13.863849 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:13.973182 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:14.188481 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:14.352600 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:14.355001 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:14.363854 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:14.689798 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:14.852522 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:14.854927 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:14.863890 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:15.189313 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:15.352512 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:15.354896 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:15.364589 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:15.689481 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:15.852471 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:15.854686 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:15.864621 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:16.188930 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:16.352348 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:16.354883 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:16.364051 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:16.473955 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:16.689771 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:16.851725 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:16.853852 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:16.864774 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:17.239423 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:17.353096 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:17.356531 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:17.369560 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:17.738328 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:17.853156 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:17.854405 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:17.864557 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:18.238701 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:18.353182 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:18.355271 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:18.364909 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:18.474941 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:18.689674 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:18.852740 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:18.855785 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:18.865338 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:19.189651 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:19.353364 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:19.355344 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:19.364738 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:19.739543 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:19.853465 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:19.855088 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:19.864517 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:20.189523 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:20.352183 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:20.354963 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:20.364107 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:20.690167 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:20.852546 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:20.855935 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:20.864056 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:20.974521 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:21.191466 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:21.352200 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:21.355300 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:21.364570 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:21.689385 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:21.852242 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:21.854202 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:21.863966 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:22.189722 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:22.352186 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:22.354347 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:22.364093 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:22.689029 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:22.851896 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:22.854171 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:22.864198 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:22.974652 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:23.193095 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:23.352172 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:23.354694 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:23.365009 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:23.690763 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:23.852746 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:23.854407 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:23.864199 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:24.189520 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:24.351220 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:24.353956 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:24.364327 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:24.689616 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:24.852743 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:24.855808 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:24.865122 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:24.974936 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:25.237573 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:25.353047 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:25.361744 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:25.364845 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:25.739641 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:25.854558 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:25.855077 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:25.864425 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:26.190332 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:26.353082 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:26.354744 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:26.364973 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:26.689615 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:26.852911 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:26.854914 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:26.865315 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:26.975072 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:27.191231 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:27.352488 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:27.354885 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:27.364418 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:27.689307 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:27.853080 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:27.855564 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:27.865057 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:28.237501 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:28.353156 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:28.354881 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:28.365119 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:28.689865 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:28.852847 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:28.855021 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:28.864841 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:29.189927 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:29.351963 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:29.354952 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:29.364911 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:29.473999 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:29.689870 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:29.852216 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:29.854441 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:29.864807 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:30.189713 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:30.352056 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:30.354372 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:30.364295 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:30.688765 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:30.851688 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:30.854150 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:30.863932 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:31.190934 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:31.364454 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:31.368530 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:31.369641 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:31.689947 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:31.852249 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:31.854376 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:31.864497 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:31.975113 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:32.235431 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:32.353050 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:32.355733 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:32.364810 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:32.689083 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:32.852941 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:32.854004 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:32.864095 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:33.190271 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:33.352452 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:33.355269 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:33.365648 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:33.689189 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:33.851921 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:33.855198 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:33.864904 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:34.190366 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:34.352499 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:34.354786 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:34.364962 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:34.517118 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:34.689437 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:34.853031 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:34.855727 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:34.865340 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:35.037183 1253594 pod_ready.go:92] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:35.037290 1253594 pod_ready.go:81] duration metric: took 25.568839486s for pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:35.037324 1253594 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5zvrf" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:35.044762 1253594 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-5zvrf" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:35.044783 1253594 pod_ready.go:81] duration metric: took 7.428266ms for pod "nvidia-device-plugin-daemonset-5zvrf" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:35.044802 1253594 pod_ready.go:38] duration metric: took 28.988777085s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 02:59:35.044822 1253594 api_server.go:52] waiting for apiserver process to appear ...
	I0308 02:59:35.044880 1253594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 02:59:35.058905 1253594 api_server.go:72] duration metric: took 38.705497091s to wait for apiserver process to appear ...
	I0308 02:59:35.058935 1253594 api_server.go:88] waiting for apiserver healthz status ...
	I0308 02:59:35.058961 1253594 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0308 02:59:35.134776 1253594 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0308 02:59:35.136679 1253594 api_server.go:141] control plane version: v1.28.4
	I0308 02:59:35.136762 1253594 api_server.go:131] duration metric: took 77.816353ms to wait for apiserver health ...
	I0308 02:59:35.136786 1253594 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 02:59:35.151096 1253594 system_pods.go:59] 19 kube-system pods found
	I0308 02:59:35.151135 1253594 system_pods.go:61] "coredns-5dd5756b68-gfwfq" [e9e8987b-e511-4f9c-8eb9-92d73278f1a7] Running
	I0308 02:59:35.151145 1253594 system_pods.go:61] "csi-hostpath-attacher-0" [98bb39ee-61e2-47c8-9002-b9865e09e7ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0308 02:59:35.151151 1253594 system_pods.go:61] "csi-hostpath-resizer-0" [a0d8e923-1a02-49be-9da7-cf326b0e555a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0308 02:59:35.151159 1253594 system_pods.go:61] "csi-hostpathplugin-5f6b6" [e11b4aef-50fe-4d0c-ab0a-662cae2679ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0308 02:59:35.151163 1253594 system_pods.go:61] "etcd-addons-096357" [1f7777eb-f55e-49b0-9e0c-6cefc68eab78] Running
	I0308 02:59:35.151167 1253594 system_pods.go:61] "kindnet-2ssjr" [0c3f19c9-bb60-4e8f-9ad6-28624f7b09df] Running
	I0308 02:59:35.151170 1253594 system_pods.go:61] "kube-apiserver-addons-096357" [136bd4e5-d6f7-440c-8429-f5c922d56721] Running
	I0308 02:59:35.151174 1253594 system_pods.go:61] "kube-controller-manager-addons-096357" [2d4f8d7e-f23b-42be-a464-11d943e64069] Running
	I0308 02:59:35.151180 1253594 system_pods.go:61] "kube-ingress-dns-minikube" [178e20ad-93cf-4745-86ce-7befaf053f24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0308 02:59:35.151186 1253594 system_pods.go:61] "kube-proxy-9q92q" [c4dcca8b-90da-4ee6-bb23-f5e5d9e52672] Running
	I0308 02:59:35.151190 1253594 system_pods.go:61] "kube-scheduler-addons-096357" [bd71011b-c413-4c76-b15a-d8946d6fa08a] Running
	I0308 02:59:35.151196 1253594 system_pods.go:61] "metrics-server-69cf46c98-tg6kt" [df94e650-b701-42b9-9c86-8d5351621dcb] Running
	I0308 02:59:35.151201 1253594 system_pods.go:61] "nvidia-device-plugin-daemonset-5zvrf" [0c58fef2-eb9d-48b2-9e64-3481e5407cb2] Running
	I0308 02:59:35.151214 1253594 system_pods.go:61] "registry-6xbnd" [c865bced-8d68-4fe9-9b58-a387fa5d841b] Running
	I0308 02:59:35.151224 1253594 system_pods.go:61] "registry-proxy-b28lv" [53d1e743-7dde-45d5-8caa-7ac196b37d07] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0308 02:59:35.151235 1253594 system_pods.go:61] "snapshot-controller-58dbcc7b99-kkgrn" [0caaf65c-0b3f-4b9e-bbbd-0b9f36217a9e] Running
	I0308 02:59:35.151243 1253594 system_pods.go:61] "snapshot-controller-58dbcc7b99-x89gg" [af8add60-5aaf-4f9e-ad90-e3ba90083d94] Running
	I0308 02:59:35.151247 1253594 system_pods.go:61] "storage-provisioner" [34b1c6c0-cbf8-4e11-a72a-d0c4c2483cb1] Running
	I0308 02:59:35.151253 1253594 system_pods.go:61] "tiller-deploy-7b677967b9-c22n7" [f7d4183c-77c2-4528-b752-df447610d59d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0308 02:59:35.151261 1253594 system_pods.go:74] duration metric: took 14.458313ms to wait for pod list to return data ...
	I0308 02:59:35.151274 1253594 default_sa.go:34] waiting for default service account to be created ...
	I0308 02:59:35.156258 1253594 default_sa.go:45] found service account: "default"
	I0308 02:59:35.156289 1253594 default_sa.go:55] duration metric: took 5.008097ms for default service account to be created ...
	I0308 02:59:35.156300 1253594 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 02:59:35.166235 1253594 system_pods.go:86] 19 kube-system pods found
	I0308 02:59:35.166270 1253594 system_pods.go:89] "coredns-5dd5756b68-gfwfq" [e9e8987b-e511-4f9c-8eb9-92d73278f1a7] Running
	I0308 02:59:35.166283 1253594 system_pods.go:89] "csi-hostpath-attacher-0" [98bb39ee-61e2-47c8-9002-b9865e09e7ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0308 02:59:35.166294 1253594 system_pods.go:89] "csi-hostpath-resizer-0" [a0d8e923-1a02-49be-9da7-cf326b0e555a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0308 02:59:35.166311 1253594 system_pods.go:89] "csi-hostpathplugin-5f6b6" [e11b4aef-50fe-4d0c-ab0a-662cae2679ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0308 02:59:35.166321 1253594 system_pods.go:89] "etcd-addons-096357" [1f7777eb-f55e-49b0-9e0c-6cefc68eab78] Running
	I0308 02:59:35.166330 1253594 system_pods.go:89] "kindnet-2ssjr" [0c3f19c9-bb60-4e8f-9ad6-28624f7b09df] Running
	I0308 02:59:35.166342 1253594 system_pods.go:89] "kube-apiserver-addons-096357" [136bd4e5-d6f7-440c-8429-f5c922d56721] Running
	I0308 02:59:35.166351 1253594 system_pods.go:89] "kube-controller-manager-addons-096357" [2d4f8d7e-f23b-42be-a464-11d943e64069] Running
	I0308 02:59:35.166364 1253594 system_pods.go:89] "kube-ingress-dns-minikube" [178e20ad-93cf-4745-86ce-7befaf053f24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0308 02:59:35.166375 1253594 system_pods.go:89] "kube-proxy-9q92q" [c4dcca8b-90da-4ee6-bb23-f5e5d9e52672] Running
	I0308 02:59:35.166386 1253594 system_pods.go:89] "kube-scheduler-addons-096357" [bd71011b-c413-4c76-b15a-d8946d6fa08a] Running
	I0308 02:59:35.166393 1253594 system_pods.go:89] "metrics-server-69cf46c98-tg6kt" [df94e650-b701-42b9-9c86-8d5351621dcb] Running
	I0308 02:59:35.166403 1253594 system_pods.go:89] "nvidia-device-plugin-daemonset-5zvrf" [0c58fef2-eb9d-48b2-9e64-3481e5407cb2] Running
	I0308 02:59:35.166411 1253594 system_pods.go:89] "registry-6xbnd" [c865bced-8d68-4fe9-9b58-a387fa5d841b] Running
	I0308 02:59:35.166422 1253594 system_pods.go:89] "registry-proxy-b28lv" [53d1e743-7dde-45d5-8caa-7ac196b37d07] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0308 02:59:35.166432 1253594 system_pods.go:89] "snapshot-controller-58dbcc7b99-kkgrn" [0caaf65c-0b3f-4b9e-bbbd-0b9f36217a9e] Running
	I0308 02:59:35.166444 1253594 system_pods.go:89] "snapshot-controller-58dbcc7b99-x89gg" [af8add60-5aaf-4f9e-ad90-e3ba90083d94] Running
	I0308 02:59:35.166452 1253594 system_pods.go:89] "storage-provisioner" [34b1c6c0-cbf8-4e11-a72a-d0c4c2483cb1] Running
	I0308 02:59:35.166466 1253594 system_pods.go:89] "tiller-deploy-7b677967b9-c22n7" [f7d4183c-77c2-4528-b752-df447610d59d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0308 02:59:35.166480 1253594 system_pods.go:126] duration metric: took 10.171169ms to wait for k8s-apps to be running ...
	I0308 02:59:35.166494 1253594 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 02:59:35.166551 1253594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 02:59:35.179800 1253594 system_svc.go:56] duration metric: took 13.294028ms WaitForService to wait for kubelet
	I0308 02:59:35.179838 1253594 kubeadm.go:576] duration metric: took 38.826436526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 02:59:35.179864 1253594 node_conditions.go:102] verifying NodePressure condition ...
	I0308 02:59:35.237246 1253594 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0308 02:59:35.237279 1253594 node_conditions.go:123] node cpu capacity is 8
	I0308 02:59:35.237292 1253594 node_conditions.go:105] duration metric: took 57.42421ms to run NodePressure ...
	I0308 02:59:35.237305 1253594 start.go:240] waiting for startup goroutines ...
	I0308 02:59:35.239411 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:35.353008 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:35.355204 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:35.365702 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:35.689968 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:35.852842 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:35.854753 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:35.865523 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:36.189292 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:36.352668 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:36.354725 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:36.365007 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:36.689412 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:36.852665 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:36.854736 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:36.864661 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:37.190912 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:37.352992 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:37.355316 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:37.364373 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:37.689519 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:37.851955 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:37.853919 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:37.864106 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:38.189457 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:38.353074 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:38.354614 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:38.365340 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:38.689776 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:38.852551 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:38.854684 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:38.864796 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:39.189533 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:39.351806 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:39.355903 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:39.364850 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:39.690426 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:39.852920 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:39.855282 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:39.864809 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:40.239998 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:40.353687 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:40.356575 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:40.365347 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:40.690008 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:40.852521 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:40.854947 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:40.865915 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:41.190242 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:41.352376 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:41.354607 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:41.364449 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:41.690094 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:41.852145 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:41.855372 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:41.865252 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:42.189340 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:42.352698 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:42.355011 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:42.364041 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:42.689366 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:42.852382 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:42.854045 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:42.864948 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:43.189130 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:43.353077 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:43.354579 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:43.364625 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:43.690315 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:43.852050 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:43.855013 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:43.864483 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:44.240999 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:44.353206 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:44.355430 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:44.364972 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:44.690885 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:44.909043 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:44.909495 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:44.909648 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:45.188904 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:45.352732 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:45.354635 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:45.364863 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:45.690179 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:45.852906 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:45.855615 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:45.865113 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:46.189884 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:46.352893 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:46.356549 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:46.364917 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:46.690295 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:46.852466 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:46.854453 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:46.865321 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:47.190118 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:47.351849 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:47.355010 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:47.365297 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:47.689853 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:47.851931 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:47.854105 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:47.864503 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:48.189256 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:48.352746 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:48.354694 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:48.364959 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:48.688764 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:48.851996 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:48.854090 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:48.864450 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:49.195362 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:49.352772 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:49.355071 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:49.364234 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:49.689066 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:49.852198 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:49.854089 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:49.863829 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:50.189976 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:50.352343 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:50.354368 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:50.364450 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:50.689758 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:50.852728 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:50.854785 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:50.864873 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:51.190317 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:51.354243 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:51.355092 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:51.364772 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:51.690030 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:51.851908 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:51.854010 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:51.863778 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:52.189701 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:52.352355 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:52.354733 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:52.364774 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:52.689604 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:52.851905 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:52.854358 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:52.864245 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:53.237782 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:53.353818 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:53.356127 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:53.365108 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:53.739107 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:53.852304 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:53.855400 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:53.865092 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:54.189556 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:54.352079 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:54.355758 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:54.365562 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:54.689437 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:54.851539 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:54.854425 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:54.864688 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:55.189441 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:55.352803 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:55.355107 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:55.364131 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:55.688808 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:55.851879 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:55.854033 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:55.864382 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:56.189069 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:56.352308 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:56.354501 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:56.364478 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:56.689657 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:56.852852 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:56.855019 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:56.864015 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:57.190336 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:57.352587 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:57.354734 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:57.365185 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:57.690283 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:57.852666 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:57.859386 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:57.864831 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:58.189916 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:58.352619 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:58.354591 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:58.365395 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:58.690689 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:58.852917 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:58.854827 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:58.864933 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:59.190080 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:59.353352 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:59.355238 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:59.364506 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:59.689467 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:59.853787 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:59.854429 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:59.864503 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:00.189495 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:00.352847 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:00.354757 1253594 kapi.go:107] duration metric: took 56.004759939s to wait for kubernetes.io/minikube-addons=registry ...
	I0308 03:00:00.364969 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:00.690577 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:00.852351 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:00.864435 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:01.191287 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:01.352595 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:01.365374 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:01.689900 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:01.852723 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:01.865098 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:02.190492 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:02.353198 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:02.365348 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:02.689081 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:02.852144 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:02.864970 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:03.188711 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:03.352110 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:03.364703 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:03.689354 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:03.852127 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:03.864668 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:04.189808 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:04.351918 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:04.364598 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:04.689344 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:04.852317 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:04.864264 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:05.188918 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:05.351872 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:05.365135 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:05.689805 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:05.853429 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:05.866072 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:06.190211 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:06.352306 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:06.364864 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:06.738787 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:06.853454 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:06.936000 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:07.330358 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:07.486137 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:07.486419 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:07.741545 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:07.853177 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:07.937897 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:08.239533 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:08.353507 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:08.365261 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:08.739844 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:08.852363 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:08.865403 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:09.189524 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:09.352689 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:09.364787 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:09.689656 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:09.852089 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:09.865575 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:10.189407 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:10.352925 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:10.365549 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:10.689579 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:10.852709 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:10.865633 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:11.189640 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:11.352485 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:11.365548 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:11.689525 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:11.852894 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:11.865605 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:12.239871 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:12.352462 1253594 kapi.go:107] duration metric: took 1m8.006226631s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0308 03:00:12.365457 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:12.768814 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:12.865008 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:13.189846 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:13.365578 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:13.689560 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:13.864931 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:14.190272 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:14.365088 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:14.688823 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:14.865335 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:15.192585 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:15.365468 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:15.690307 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:15.864887 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:16.190432 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:16.366123 1253594 kapi.go:107] duration metric: took 1m9.504682364s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0308 03:00:16.368370 1253594 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-096357 cluster.
	I0308 03:00:16.369724 1253594 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0308 03:00:16.371016 1253594 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0308 03:00:16.689546 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:17.190002 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:17.691726 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:18.190488 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:18.689686 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:19.188882 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:19.689125 1253594 kapi.go:107] duration metric: took 1m14.505736463s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0308 03:00:19.690966 1253594 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, ingress-dns, inspektor-gadget, helm-tiller, yakd, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0308 03:00:19.692142 1253594 addons.go:505] duration metric: took 1m23.338716824s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner-rancher storage-provisioner ingress-dns inspektor-gadget helm-tiller yakd metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0308 03:00:19.692179 1253594 start.go:245] waiting for cluster config update ...
	I0308 03:00:19.692199 1253594 start.go:254] writing updated cluster config ...
	I0308 03:00:19.692478 1253594 ssh_runner.go:195] Run: rm -f paused
	I0308 03:00:19.739146 1253594 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 03:00:19.741951 1253594 out.go:177] * Done! kubectl is now configured to use "addons-096357" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 08 03:03:06 addons-096357 crio[962]: time="2024-03-08 03:03:06.742352176Z" level=info msg="Removing container: 5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666" id=3dce0a3b-14a8-4bca-b4fe-7e651040109f name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 08 03:03:06 addons-096357 crio[962]: time="2024-03-08 03:03:06.756046092Z" level=info msg="Removed container 5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=3dce0a3b-14a8-4bca-b4fe-7e651040109f name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 08 03:03:08 addons-096357 crio[962]: time="2024-03-08 03:03:08.233089895Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=b73dbc44-c207-4bc8-93e3-e1b8282fa40f name=/runtime.v1.ImageService/PullImage
	Mar 08 03:03:08 addons-096357 crio[962]: time="2024-03-08 03:03:08.234097074Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=11e3aa89-b5da-44a3-870b-8f0564156a99 name=/runtime.v1.ImageService/ImageStatus
	Mar 08 03:03:08 addons-096357 crio[962]: time="2024-03-08 03:03:08.235079331Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=11e3aa89-b5da-44a3-870b-8f0564156a99 name=/runtime.v1.ImageService/ImageStatus
	Mar 08 03:03:08 addons-096357 crio[962]: time="2024-03-08 03:03:08.235912382Z" level=info msg="Creating container: default/hello-world-app-5d77478584-7rpmh/hello-world-app" id=d275e6db-b7e2-4a96-a3be-54c0e05f0a90 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 08 03:03:08 addons-096357 crio[962]: time="2024-03-08 03:03:08.236012256Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 08 03:03:08 addons-096357 crio[962]: time="2024-03-08 03:03:08.282986126Z" level=info msg="Stopping container: 05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6 (timeout: 2s)" id=f9281c3f-efc5-410a-8d4f-d511df2d54f6 name=/runtime.v1.RuntimeService/StopContainer
	Mar 08 03:03:08 addons-096357 crio[962]: time="2024-03-08 03:03:08.286187328Z" level=info msg="Created container f04ce88bf489524dc7bbc5c796a5d25ea0231ebb078ef65336b387de88b8b406: default/hello-world-app-5d77478584-7rpmh/hello-world-app" id=d275e6db-b7e2-4a96-a3be-54c0e05f0a90 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 08 03:03:08 addons-096357 crio[962]: time="2024-03-08 03:03:08.286722441Z" level=info msg="Starting container: f04ce88bf489524dc7bbc5c796a5d25ea0231ebb078ef65336b387de88b8b406" id=60a34006-0ec8-4c59-958a-0cf82a7607a3 name=/runtime.v1.RuntimeService/StartContainer
	Mar 08 03:03:08 addons-096357 crio[962]: time="2024-03-08 03:03:08.292559358Z" level=info msg="Started container" PID=10442 containerID=f04ce88bf489524dc7bbc5c796a5d25ea0231ebb078ef65336b387de88b8b406 description=default/hello-world-app-5d77478584-7rpmh/hello-world-app id=60a34006-0ec8-4c59-958a-0cf82a7607a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75239d422766d8c8bc5e4398881a6167805dde280bd21cee776560252b1a2760
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.289943265Z" level=warning msg="Stopping container 05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=f9281c3f-efc5-410a-8d4f-d511df2d54f6 name=/runtime.v1.RuntimeService/StopContainer
	Mar 08 03:03:10 addons-096357 conmon[5936]: conmon 05b0f45d1f5bdbe56c4d <ninfo>: container 5948 exited with status 137
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.421452850Z" level=info msg="Stopped container 05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6: ingress-nginx/ingress-nginx-controller-76dc478dd8-zsh28/controller" id=f9281c3f-efc5-410a-8d4f-d511df2d54f6 name=/runtime.v1.RuntimeService/StopContainer
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.422079096Z" level=info msg="Stopping pod sandbox: d004e293b57affc4245d3c57037a2fb1757b0f6b69713195215beef388d4461f" id=e52a060c-e228-45da-b66f-0b2e2221a810 name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.424922777Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-LFJ2YQSWM32S3ROQ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-NPV5SHYZ4VREV62C - [0:0]\n-X KUBE-HP-NPV5SHYZ4VREV62C\n-X KUBE-HP-LFJ2YQSWM32S3ROQ\nCOMMIT\n"
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.426272694Z" level=info msg="Closing host port tcp:80"
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.426313766Z" level=info msg="Closing host port tcp:443"
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.427623305Z" level=info msg="Host port tcp:80 does not have an open socket"
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.427638925Z" level=info msg="Host port tcp:443 does not have an open socket"
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.427769695Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-76dc478dd8-zsh28 Namespace:ingress-nginx ID:d004e293b57affc4245d3c57037a2fb1757b0f6b69713195215beef388d4461f UID:da2d5741-caea-41c5-ace3-c4d20e28c595 NetNS:/var/run/netns/353898aa-094f-4752-aa69-81cadf66fed0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.427889973Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-76dc478dd8-zsh28 from CNI network \"kindnet\" (type=ptp)"
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.466986277Z" level=info msg="Stopped pod sandbox: d004e293b57affc4245d3c57037a2fb1757b0f6b69713195215beef388d4461f" id=e52a060c-e228-45da-b66f-0b2e2221a810 name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.754295901Z" level=info msg="Removing container: 05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6" id=b1f7e0cc-325b-4342-8303-28cf94fbd664 name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 08 03:03:10 addons-096357 crio[962]: time="2024-03-08 03:03:10.766881169Z" level=info msg="Removed container 05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6: ingress-nginx/ingress-nginx-controller-76dc478dd8-zsh28/controller" id=b1f7e0cc-325b-4342-8303-28cf94fbd664 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f04ce88bf4895       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   75239d422766d       hello-world-app-5d77478584-7rpmh
	aa1150526de54       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                              2 minutes ago       Running             nginx                     0                   d387c5c9bcab5       nginx
	df80420c66e1b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 2 minutes ago       Running             gcp-auth                  0                   8e13906c4e71b       gcp-auth-5f6b4f85fd-dg67t
	79ec8f6399a18       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                             3 minutes ago       Exited              patch                     1                   d652b751dcf38       ingress-nginx-admission-patch-fk5tc
	e53aedfb57474       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   dfeca1080201b       ingress-nginx-admission-create-sgbgl
	6cdd7dfe089d0       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   0162bf33e0422       yakd-dashboard-9947fc6bf-cfg2l
	467ed4f177ffa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   110c5c6bcb9c3       storage-provisioner
	d1c9c9a01709d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   730b774f37ea7       coredns-5dd5756b68-gfwfq
	b8d8e1a75a18a       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988                           4 minutes ago       Running             kindnet-cni               0                   abda6b79092c0       kindnet-2ssjr
	33835a3123b7d       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   0d9bcdcdf3e9d       kube-proxy-9q92q
	c3cd2cdf16085       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   9468058e62d41       kube-controller-manager-addons-096357
	b08667073714f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   e1b41a4f4f9e3       etcd-addons-096357
	5842eb5e64906       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   d64e3be0c28c9       kube-apiserver-addons-096357
	31d138d7bbf75       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   dde4a55ef60df       kube-scheduler-addons-096357
	
	
	==> coredns [d1c9c9a01709da56a298714930958b00f5f3151c5f4e702a5df9e18d695b48ae] <==
	[INFO] 10.244.0.14:43864 - 35827 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099645s
	[INFO] 10.244.0.14:38013 - 41511 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00355212s
	[INFO] 10.244.0.14:38013 - 14122 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004372298s
	[INFO] 10.244.0.14:33686 - 15745 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004237256s
	[INFO] 10.244.0.14:33686 - 40068 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005051536s
	[INFO] 10.244.0.14:34309 - 24039 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004049101s
	[INFO] 10.244.0.14:34309 - 2533 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004152533s
	[INFO] 10.244.0.14:56419 - 7099 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000061711s
	[INFO] 10.244.0.14:56419 - 29625 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087735s
	[INFO] 10.244.0.21:40050 - 31143 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000234412s
	[INFO] 10.244.0.21:38298 - 16160 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000309339s
	[INFO] 10.244.0.21:54498 - 53107 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131001s
	[INFO] 10.244.0.21:60564 - 3516 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164732s
	[INFO] 10.244.0.21:41263 - 14095 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134863s
	[INFO] 10.244.0.21:34588 - 414 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000676334s
	[INFO] 10.244.0.21:46411 - 9418 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007202391s
	[INFO] 10.244.0.21:55919 - 12170 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.009295506s
	[INFO] 10.244.0.21:35702 - 52201 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005944512s
	[INFO] 10.244.0.21:35436 - 4571 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00737081s
	[INFO] 10.244.0.21:44427 - 22182 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005052488s
	[INFO] 10.244.0.21:48647 - 49063 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00580834s
	[INFO] 10.244.0.21:35358 - 25288 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000701367s
	[INFO] 10.244.0.21:40245 - 42800 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000751794s
	[INFO] 10.244.0.24:46746 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000179215s
	[INFO] 10.244.0.24:55382 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162405s
	
	
	==> describe nodes <==
	Name:               addons-096357
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-096357
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=addons-096357
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T02_58_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-096357
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 02:58:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-096357
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:03:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:01:16 +0000   Fri, 08 Mar 2024 02:58:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:01:16 +0000   Fri, 08 Mar 2024 02:58:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:01:16 +0000   Fri, 08 Mar 2024 02:58:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:01:16 +0000   Fri, 08 Mar 2024 02:59:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-096357
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 5d87022a679f418187d5ddef2a1c9837
	  System UUID:                f68cf2d4-66b4-4770-975b-c3f6179239f2
	  Boot ID:                    a24da1d7-0c05-43c1-a2f9-39bce5338f15
	  Kernel Version:             5.15.0-1053-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-7rpmh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-5f6b4f85fd-dg67t                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 coredns-5dd5756b68-gfwfq                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m19s
	  kube-system                 etcd-addons-096357                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m32s
	  kube-system                 kindnet-2ssjr                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m19s
	  kube-system                 kube-apiserver-addons-096357             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-addons-096357    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-9q92q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-addons-096357             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-cfg2l           0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m15s  kube-proxy       
	  Normal  Starting                 4m32s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m32s  kubelet          Node addons-096357 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s  kubelet          Node addons-096357 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s  kubelet          Node addons-096357 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m20s  node-controller  Node addons-096357 event: Registered Node addons-096357 in Controller
	  Normal  NodeReady                4m10s  kubelet          Node addons-096357 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 3d fc bc 20 95 08 06
	[  +0.072677] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 9e 0e c5 9a 10 08 06
	[ +13.018318] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 cb 46 a2 36 1f 08 06
	[  +0.000336] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9e 0e c5 9a 10 08 06
	[  +3.877840] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 90 f7 9c ef 79 08 06
	[  +0.000304] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 72 7e 58 7d cc 71 08 06
	[Mar 8 03:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 22 d3 17 d1 9e 62 59 69 b4 a6 72 08 00
	[  +1.003685] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 22 d3 17 d1 9e 62 59 69 b4 a6 72 08 00
	[  +2.015858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 22 d3 17 d1 9e 62 59 69 b4 a6 72 08 00
	[  +4.223558] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 86 22 d3 17 d1 9e 62 59 69 b4 a6 72 08 00
	[Mar 8 03:01] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 86 22 d3 17 d1 9e 62 59 69 b4 a6 72 08 00
	[ +16.126414] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 22 d3 17 d1 9e 62 59 69 b4 a6 72 08 00
	[ +33.276805] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 22 d3 17 d1 9e 62 59 69 b4 a6 72 08 00
	
	
	==> etcd [b08667073714f119924d5d083d68f7695bbc35278e39c94db60eac785faac00a] <==
	{"level":"info","ts":"2024-03-08T02:59:02.153189Z","caller":"traceutil/trace.go:171","msg":"trace[356450203] linearizableReadLoop","detail":"{readStateIndex:453; appliedIndex:450; }","duration":"100.595736ms","start":"2024-03-08T02:59:02.052576Z","end":"2024-03-08T02:59:02.153172Z","steps":["trace[356450203] 'read index received'  (duration: 91.847269ms)","trace[356450203] 'applied index is now lower than readState.Index'  (duration: 8.746971ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-08T02:59:02.153267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.41356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/local-path-provisioner-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T02:59:02.154018Z","caller":"traceutil/trace.go:171","msg":"trace[1435681777] range","detail":"{range_begin:/registry/clusterroles/local-path-provisioner-role; range_end:; response_count:0; response_revision:446; }","duration":"105.173252ms","start":"2024-03-08T02:59:02.04883Z","end":"2024-03-08T02:59:02.154003Z","steps":["trace[1435681777] 'agreement among raft nodes before linearized reading'  (duration: 104.388721ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.153295Z","caller":"traceutil/trace.go:171","msg":"trace[1719164322] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"104.527434ms","start":"2024-03-08T02:59:02.048761Z","end":"2024-03-08T02:59:02.153288Z","steps":["trace[1719164322] 'process raft request'  (duration: 103.294673ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.153405Z","caller":"traceutil/trace.go:171","msg":"trace[1413678497] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"104.626113ms","start":"2024-03-08T02:59:02.04877Z","end":"2024-03-08T02:59:02.153396Z","steps":["trace[1413678497] 'process raft request'  (duration: 103.333876ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:59:02.952296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.38559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/local-path-storage/local-path-config\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T02:59:02.955077Z","caller":"traceutil/trace.go:171","msg":"trace[294861755] range","detail":"{range_begin:/registry/configmaps/local-path-storage/local-path-config; range_end:; response_count:0; response_revision:519; }","duration":"105.178803ms","start":"2024-03-08T02:59:02.84988Z","end":"2024-03-08T02:59:02.955059Z","steps":["trace[294861755] 'agreement among raft nodes before linearized reading'  (duration: 98.058861ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:59:02.952515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.814934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5307"}
	{"level":"warn","ts":"2024-03-08T02:59:02.952447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.75464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/ingress-nginx\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-03-08T02:59:02.95283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.13843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/local-path-storage/local-path-provisioner\" ","response":"range_response_count:1 size:3551"}
	{"level":"warn","ts":"2024-03-08T02:59:02.952781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.840944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T02:59:02.95557Z","caller":"traceutil/trace.go:171","msg":"trace[601603887] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io; range_end:; response_count:0; response_revision:519; }","duration":"105.625927ms","start":"2024-03-08T02:59:02.849929Z","end":"2024-03-08T02:59:02.955555Z","steps":["trace[601603887] 'agreement among raft nodes before linearized reading'  (duration: 98.002855ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.955801Z","caller":"traceutil/trace.go:171","msg":"trace[173580253] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:519; }","duration":"106.09433ms","start":"2024-03-08T02:59:02.849695Z","end":"2024-03-08T02:59:02.955789Z","steps":["trace[173580253] 'agreement among raft nodes before linearized reading'  (duration: 98.171513ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.955975Z","caller":"traceutil/trace.go:171","msg":"trace[329350123] range","detail":"{range_begin:/registry/namespaces/ingress-nginx; range_end:; response_count:0; response_revision:519; }","duration":"106.290247ms","start":"2024-03-08T02:59:02.849673Z","end":"2024-03-08T02:59:02.955964Z","steps":["trace[329350123] 'agreement among raft nodes before linearized reading'  (duration: 98.776639ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.956137Z","caller":"traceutil/trace.go:171","msg":"trace[167553663] range","detail":"{range_begin:/registry/deployments/local-path-storage/local-path-provisioner; range_end:; response_count:1; response_revision:519; }","duration":"106.445429ms","start":"2024-03-08T02:59:02.84968Z","end":"2024-03-08T02:59:02.956125Z","steps":["trace[167553663] 'agreement among raft nodes before linearized reading'  (duration: 98.227907ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:45.14087Z","caller":"traceutil/trace.go:171","msg":"trace[409699703] transaction","detail":"{read_only:false; response_revision:1023; number_of_response:1; }","duration":"218.251772ms","start":"2024-03-08T02:59:44.922576Z","end":"2024-03-08T02:59:45.140828Z","steps":["trace[409699703] 'process raft request'  (duration: 131.611285ms)","trace[409699703] 'compare'  (duration: 86.48519ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T03:00:07.250531Z","caller":"traceutil/trace.go:171","msg":"trace[939714007] transaction","detail":"{read_only:false; response_revision:1121; number_of_response:1; }","duration":"107.265604ms","start":"2024-03-08T03:00:07.143243Z","end":"2024-03-08T03:00:07.250508Z","steps":["trace[939714007] 'process raft request'  (duration: 107.13888ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:00:07.326252Z","caller":"traceutil/trace.go:171","msg":"trace[913018281] transaction","detail":"{read_only:false; response_revision:1122; number_of_response:1; }","duration":"175.103806ms","start":"2024-03-08T03:00:07.151118Z","end":"2024-03-08T03:00:07.326222Z","steps":["trace[913018281] 'process raft request'  (duration: 174.909796ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:00:07.483925Z","caller":"traceutil/trace.go:171","msg":"trace[834842879] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1156; }","duration":"133.539853ms","start":"2024-03-08T03:00:07.350368Z","end":"2024-03-08T03:00:07.483908Z","steps":["trace[834842879] 'read index received'  (duration: 105.009507ms)","trace[834842879] 'applied index is now lower than readState.Index'  (duration: 28.52927ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-08T03:00:07.4841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.735268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14575"}
	{"level":"warn","ts":"2024-03-08T03:00:07.484098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.424788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11262"}
	{"level":"info","ts":"2024-03-08T03:00:07.484143Z","caller":"traceutil/trace.go:171","msg":"trace[2138155592] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1122; }","duration":"133.797907ms","start":"2024-03-08T03:00:07.350333Z","end":"2024-03-08T03:00:07.484131Z","steps":["trace[2138155592] 'agreement among raft nodes before linearized reading'  (duration: 133.668944ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:00:07.484149Z","caller":"traceutil/trace.go:171","msg":"trace[1059642713] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1122; }","duration":"119.490028ms","start":"2024-03-08T03:00:07.364649Z","end":"2024-03-08T03:00:07.484139Z","steps":["trace[1059642713] 'agreement among raft nodes before linearized reading'  (duration: 119.3684ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:00:12.764756Z","caller":"traceutil/trace.go:171","msg":"trace[1123586567] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"121.86086ms","start":"2024-03-08T03:00:12.642834Z","end":"2024-03-08T03:00:12.764695Z","steps":["trace[1123586567] 'process raft request'  (duration: 53.669189ms)","trace[1123586567] 'compare'  (duration: 68.03382ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T03:00:45.478347Z","caller":"traceutil/trace.go:171","msg":"trace[754578964] transaction","detail":"{read_only:false; response_revision:1440; number_of_response:1; }","duration":"116.912793ms","start":"2024-03-08T03:00:45.361414Z","end":"2024-03-08T03:00:45.478327Z","steps":["trace[754578964] 'process raft request'  (duration: 116.76973ms)"],"step_count":1}
	
	
	==> gcp-auth [df80420c66e1bbef5aaa5cc0f8e23f73fc7b4d1008af650e2aadc37c8de7809d] <==
	2024/03/08 03:00:15 GCP Auth Webhook started!
	2024/03/08 03:00:26 Ready to marshal response ...
	2024/03/08 03:00:26 Ready to write response ...
	2024/03/08 03:00:26 Ready to marshal response ...
	2024/03/08 03:00:26 Ready to write response ...
	2024/03/08 03:00:30 Ready to marshal response ...
	2024/03/08 03:00:30 Ready to write response ...
	2024/03/08 03:00:30 Ready to marshal response ...
	2024/03/08 03:00:30 Ready to write response ...
	2024/03/08 03:00:42 Ready to marshal response ...
	2024/03/08 03:00:42 Ready to write response ...
	2024/03/08 03:00:43 Ready to marshal response ...
	2024/03/08 03:00:43 Ready to write response ...
	2024/03/08 03:00:56 Ready to marshal response ...
	2024/03/08 03:00:56 Ready to write response ...
	2024/03/08 03:01:29 Ready to marshal response ...
	2024/03/08 03:01:29 Ready to write response ...
	2024/03/08 03:03:05 Ready to marshal response ...
	2024/03/08 03:03:05 Ready to write response ...
	
	
	==> kernel <==
	 03:03:15 up  5:45,  0 users,  load average: 0.41, 1.10, 1.74
	Linux addons-096357 5.15.0-1053-gcp #61~20.04.1-Ubuntu SMP Mon Feb 26 16:50:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [b8d8e1a75a18a953d4b83284c434b026b3dcfb7f58857a28d4f41e0d9c85aac6] <==
	I0308 03:01:15.240239       1 main.go:227] handling current node
	I0308 03:01:25.252726       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:01:25.252753       1 main.go:227] handling current node
	I0308 03:01:35.256319       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:01:35.256344       1 main.go:227] handling current node
	I0308 03:01:45.264666       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:01:45.264690       1 main.go:227] handling current node
	I0308 03:01:55.269068       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:01:55.269325       1 main.go:227] handling current node
	I0308 03:02:05.273359       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:02:05.273383       1 main.go:227] handling current node
	I0308 03:02:15.277899       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:02:15.277922       1 main.go:227] handling current node
	I0308 03:02:25.289800       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:02:25.289856       1 main.go:227] handling current node
	I0308 03:02:35.293943       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:02:35.293977       1 main.go:227] handling current node
	I0308 03:02:45.306016       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:02:45.306039       1 main.go:227] handling current node
	I0308 03:02:55.310358       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:02:55.310386       1 main.go:227] handling current node
	I0308 03:03:05.314755       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:03:05.314789       1 main.go:227] handling current node
	I0308 03:03:15.319220       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:03:15.319247       1 main.go:227] handling current node
	
	
	==> kube-apiserver [5842eb5e64906c22f322f9c7479008d71e3fec9ded85f5f20ebeb9580664fe31] <==
	I0308 03:00:47.258082       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0308 03:00:47.265216       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0308 03:00:48.275754       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0308 03:00:59.040199       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0308 03:01:10.120086       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0308 03:01:46.777365       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.777513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.784105       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.784169       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.790458       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.790509       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.791064       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.791174       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.800463       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.800518       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.801524       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.801553       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.810501       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.810552       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 03:01:46.811150       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 03:01:46.811169       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0308 03:01:47.791137       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0308 03:01:47.811230       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0308 03:01:47.845153       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0308 03:03:05.645240       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.216.155"}
	
	
	==> kube-controller-manager [c3cd2cdf16085464c0d0701ed3a9d5fd60c59c4595dadcf1a6f056ca4f60f030] <==
	W0308 03:02:21.755653       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:21.755684       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:02:27.855577       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:27.855613       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:02:30.497801       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:30.497840       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:02:56.340295       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:56.340329       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:02:57.428901       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:02:57.428941       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:03:04.415529       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:03:04.415566       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0308 03:03:05.487762       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0308 03:03:05.498902       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-7rpmh"
	I0308 03:03:05.503466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.985572ms"
	I0308 03:03:05.510863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.339144ms"
	I0308 03:03:05.510954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.96µs"
	I0308 03:03:05.517012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="82.276µs"
	I0308 03:03:07.270658       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0308 03:03:07.271493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="6.937µs"
	I0308 03:03:07.274532       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0308 03:03:08.761727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.058228ms"
	I0308 03:03:08.761810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.102µs"
	W0308 03:03:09.589435       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:03:09.589475       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [33835a3123b7d8932a967695c9208851459006d8f1b2c40a47158a5ebb524058] <==
	I0308 02:58:57.936522       1 server_others.go:69] "Using iptables proxy"
	I0308 02:58:58.450099       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0308 02:58:59.647674       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0308 02:58:59.836808       1 server_others.go:152] "Using iptables Proxier"
	I0308 02:58:59.836953       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0308 02:58:59.837000       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0308 02:58:59.837052       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 02:58:59.837743       1 server.go:846] "Version info" version="v1.28.4"
	I0308 02:58:59.837838       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 02:58:59.840602       1 config.go:188] "Starting service config controller"
	I0308 02:58:59.840684       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 02:58:59.840748       1 config.go:97] "Starting endpoint slice config controller"
	I0308 02:58:59.840776       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 02:58:59.841458       1 config.go:315] "Starting node config controller"
	I0308 02:58:59.841518       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 02:58:59.944667       1 shared_informer.go:318] Caches are synced for node config
	I0308 02:58:59.945856       1 shared_informer.go:318] Caches are synced for service config
	I0308 02:58:59.945958       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [31d138d7bbf75db50d64c6819489eea9d6c30785439df075e4d229475fbc20c2] <==
	E0308 02:58:40.748026       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 02:58:40.743115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 02:58:40.748069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 02:58:40.748076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 02:58:40.748099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 02:58:40.748079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 02:58:40.748086       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 02:58:40.748185       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 02:58:40.748270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 02:58:40.748214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 02:58:40.748160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 02:58:40.748331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 02:58:40.748337       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 02:58:40.748342       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 02:58:40.748668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 02:58:40.748682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 02:58:41.568688       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 02:58:41.568717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 02:58:41.698364       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 02:58:41.698406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 02:58:41.731876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 02:58:41.731912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 02:58:41.745518       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 02:58:41.745540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0308 02:58:42.240296       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 03:03:05 addons-096357 kubelet[1665]: W0308 03:03:05.858830    1665 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4/crio-75239d422766d8c8bc5e4398881a6167805dde280bd21cee776560252b1a2760 WatchSource:0}: Error finding container 75239d422766d8c8bc5e4398881a6167805dde280bd21cee776560252b1a2760: Status 404 returned error can't find the container with id 75239d422766d8c8bc5e4398881a6167805dde280bd21cee776560252b1a2760
	Mar 08 03:03:06 addons-096357 kubelet[1665]: I0308 03:03:06.591408    1665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2jsp\" (UniqueName: \"kubernetes.io/projected/178e20ad-93cf-4745-86ce-7befaf053f24-kube-api-access-g2jsp\") pod \"178e20ad-93cf-4745-86ce-7befaf053f24\" (UID: \"178e20ad-93cf-4745-86ce-7befaf053f24\") "
	Mar 08 03:03:06 addons-096357 kubelet[1665]: I0308 03:03:06.593340    1665 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/178e20ad-93cf-4745-86ce-7befaf053f24-kube-api-access-g2jsp" (OuterVolumeSpecName: "kube-api-access-g2jsp") pod "178e20ad-93cf-4745-86ce-7befaf053f24" (UID: "178e20ad-93cf-4745-86ce-7befaf053f24"). InnerVolumeSpecName "kube-api-access-g2jsp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:03:06 addons-096357 kubelet[1665]: I0308 03:03:06.691717    1665 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g2jsp\" (UniqueName: \"kubernetes.io/projected/178e20ad-93cf-4745-86ce-7befaf053f24-kube-api-access-g2jsp\") on node \"addons-096357\" DevicePath \"\""
	Mar 08 03:03:06 addons-096357 kubelet[1665]: I0308 03:03:06.741336    1665 scope.go:117] "RemoveContainer" containerID="5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666"
	Mar 08 03:03:06 addons-096357 kubelet[1665]: I0308 03:03:06.756284    1665 scope.go:117] "RemoveContainer" containerID="5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666"
	Mar 08 03:03:06 addons-096357 kubelet[1665]: E0308 03:03:06.756649    1665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666\": container with ID starting with 5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666 not found: ID does not exist" containerID="5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666"
	Mar 08 03:03:06 addons-096357 kubelet[1665]: I0308 03:03:06.756697    1665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666"} err="failed to get container status \"5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666\": rpc error: code = NotFound desc = could not find container \"5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666\": container with ID starting with 5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666 not found: ID does not exist"
	Mar 08 03:03:07 addons-096357 kubelet[1665]: E0308 03:03:07.050525    1665 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c868395742345b8f7be8ce477439179f9ebe5e3282234baf6b399814ccf00af4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c868395742345b8f7be8ce477439179f9ebe5e3282234baf6b399814ccf00af4/diff: no such file or directory, extraDiskErr: <nil>
	Mar 08 03:03:07 addons-096357 kubelet[1665]: I0308 03:03:07.354898    1665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="178e20ad-93cf-4745-86ce-7befaf053f24" path="/var/lib/kubelet/pods/178e20ad-93cf-4745-86ce-7befaf053f24/volumes"
	Mar 08 03:03:07 addons-096357 kubelet[1665]: I0308 03:03:07.355463    1665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2514e680-87ff-4f3e-8dbc-dbdc376d93a5" path="/var/lib/kubelet/pods/2514e680-87ff-4f3e-8dbc-dbdc376d93a5/volumes"
	Mar 08 03:03:07 addons-096357 kubelet[1665]: I0308 03:03:07.355913    1665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3f19fa64-9c85-4716-9d12-1efabfce703d" path="/var/lib/kubelet/pods/3f19fa64-9c85-4716-9d12-1efabfce703d/volumes"
	Mar 08 03:03:08 addons-096357 kubelet[1665]: E0308 03:03:08.246166    1665 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7114399e49298fa369d84c7570f096d7ac0a9ee020a88516728446274c625905/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7114399e49298fa369d84c7570f096d7ac0a9ee020a88516728446274c625905/diff: no such file or directory, extraDiskErr: <nil>
	Mar 08 03:03:08 addons-096357 kubelet[1665]: I0308 03:03:08.756863    1665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-7rpmh" podStartSLOduration=1.384851996 podCreationTimestamp="2024-03-08 03:03:05 +0000 UTC" firstStartedPulling="2024-03-08 03:03:05.861487807 +0000 UTC m=+262.608654245" lastFinishedPulling="2024-03-08 03:03:08.233438813 +0000 UTC m=+264.980605253" observedRunningTime="2024-03-08 03:03:08.756407494 +0000 UTC m=+265.503573936" watchObservedRunningTime="2024-03-08 03:03:08.756803004 +0000 UTC m=+265.503969445"
	Mar 08 03:03:10 addons-096357 kubelet[1665]: I0308 03:03:10.654758    1665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8xd6\" (UniqueName: \"kubernetes.io/projected/da2d5741-caea-41c5-ace3-c4d20e28c595-kube-api-access-q8xd6\") pod \"da2d5741-caea-41c5-ace3-c4d20e28c595\" (UID: \"da2d5741-caea-41c5-ace3-c4d20e28c595\") "
	Mar 08 03:03:10 addons-096357 kubelet[1665]: I0308 03:03:10.654819    1665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da2d5741-caea-41c5-ace3-c4d20e28c595-webhook-cert\") pod \"da2d5741-caea-41c5-ace3-c4d20e28c595\" (UID: \"da2d5741-caea-41c5-ace3-c4d20e28c595\") "
	Mar 08 03:03:10 addons-096357 kubelet[1665]: I0308 03:03:10.656813    1665 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2d5741-caea-41c5-ace3-c4d20e28c595-kube-api-access-q8xd6" (OuterVolumeSpecName: "kube-api-access-q8xd6") pod "da2d5741-caea-41c5-ace3-c4d20e28c595" (UID: "da2d5741-caea-41c5-ace3-c4d20e28c595"). InnerVolumeSpecName "kube-api-access-q8xd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:03:10 addons-096357 kubelet[1665]: I0308 03:03:10.656871    1665 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2d5741-caea-41c5-ace3-c4d20e28c595-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "da2d5741-caea-41c5-ace3-c4d20e28c595" (UID: "da2d5741-caea-41c5-ace3-c4d20e28c595"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 08 03:03:10 addons-096357 kubelet[1665]: I0308 03:03:10.753262    1665 scope.go:117] "RemoveContainer" containerID="05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6"
	Mar 08 03:03:10 addons-096357 kubelet[1665]: I0308 03:03:10.755580    1665 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da2d5741-caea-41c5-ace3-c4d20e28c595-webhook-cert\") on node \"addons-096357\" DevicePath \"\""
	Mar 08 03:03:10 addons-096357 kubelet[1665]: I0308 03:03:10.755610    1665 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q8xd6\" (UniqueName: \"kubernetes.io/projected/da2d5741-caea-41c5-ace3-c4d20e28c595-kube-api-access-q8xd6\") on node \"addons-096357\" DevicePath \"\""
	Mar 08 03:03:10 addons-096357 kubelet[1665]: I0308 03:03:10.767104    1665 scope.go:117] "RemoveContainer" containerID="05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6"
	Mar 08 03:03:10 addons-096357 kubelet[1665]: E0308 03:03:10.767399    1665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6\": container with ID starting with 05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6 not found: ID does not exist" containerID="05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6"
	Mar 08 03:03:10 addons-096357 kubelet[1665]: I0308 03:03:10.767446    1665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6"} err="failed to get container status \"05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6\": rpc error: code = NotFound desc = could not find container \"05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6\": container with ID starting with 05b0f45d1f5bdbe56c4d83a46ddf5022184a13c20e82be7a8b694e9912ee34f6 not found: ID does not exist"
	Mar 08 03:03:11 addons-096357 kubelet[1665]: I0308 03:03:11.354182    1665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="da2d5741-caea-41c5-ace3-c4d20e28c595" path="/var/lib/kubelet/pods/da2d5741-caea-41c5-ace3-c4d20e28c595/volumes"
	
	
	==> storage-provisioner [467ed4f177ffab0a3446f269e31d4c97c1c8454b50e839fedfa72072239fc771] <==
	I0308 02:59:08.236763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 02:59:08.247490       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 02:59:08.247644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 02:59:08.257412       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 02:59:08.257600       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-096357_31364743-c6e2-4087-b1ed-9f4cecf34757!
	I0308 02:59:08.258252       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d333fa28-702d-4e42-b931-a1eef4b74a5a", APIVersion:"v1", ResourceVersion:"872", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-096357_31364743-c6e2-4087-b1ed-9f4cecf34757 became leader
	I0308 02:59:08.358062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-096357_31364743-c6e2-4087-b1ed-9f4cecf34757!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-096357 -n addons-096357
helpers_test.go:261: (dbg) Run:  kubectl --context addons-096357 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.97s)

                                                
                                    
TestAddons/parallel/Headlamp (2.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-096357 --alsologtostderr -v=1
addons_test.go:824: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-096357 --alsologtostderr -v=1: exit status 11 (292.076765ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:00:39.117911 1263344 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:00:39.118179 1263344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:00:39.118190 1263344 out.go:304] Setting ErrFile to fd 2...
	I0308 03:00:39.118196 1263344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:00:39.118443 1263344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 03:00:39.118728 1263344 mustload.go:65] Loading cluster: addons-096357
	I0308 03:00:39.119092 1263344 config.go:182] Loaded profile config "addons-096357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:00:39.119122 1263344 addons.go:597] checking whether the cluster is paused
	I0308 03:00:39.119222 1263344 config.go:182] Loaded profile config "addons-096357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:00:39.119239 1263344 host.go:66] Checking if "addons-096357" exists ...
	I0308 03:00:39.119648 1263344 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 03:00:39.135546 1263344 ssh_runner.go:195] Run: systemctl --version
	I0308 03:00:39.135601 1263344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 03:00:39.160128 1263344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 03:00:39.245933 1263344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 03:00:39.246024 1263344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:00:39.282532 1263344 cri.go:89] found id: "3f0a679acb6a4d1399856ec38e06de7f1d3e3cb1d1d80ee52f30e10ed14a19c3"
	I0308 03:00:39.282574 1263344 cri.go:89] found id: "c632f0ca28a170880500581ce8f57e6432e33ab899fa50f8d3533cfce97d1eae"
	I0308 03:00:39.282581 1263344 cri.go:89] found id: "779e547f540c3b99ddb2f5b6875564872d04aac18a0cbc98801b8a4de0de3820"
	I0308 03:00:39.282592 1263344 cri.go:89] found id: "375033efa75a9615a7490c07d47d66d8a42c8e56dcc52f263350dce6419709eb"
	I0308 03:00:39.282596 1263344 cri.go:89] found id: "e158a0d6c92fd2c84445e7b164b898d79793b071cc6bbae663ec4638aae01da4"
	I0308 03:00:39.282601 1263344 cri.go:89] found id: "a4e5132c4096e1fe1926d030ee75952e75c446c1c0face714f82569c0fca1ab1"
	I0308 03:00:39.282607 1263344 cri.go:89] found id: "f4f631e73a3b311942a6a857f41a082309d03454b50cb1d444698b5b862a932b"
	I0308 03:00:39.282611 1263344 cri.go:89] found id: "5688e98a0d1fd299c0afa88fc0e810dae3435e97b3bbbd26cfc83cde4f7161bc"
	I0308 03:00:39.282615 1263344 cri.go:89] found id: "7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7"
	I0308 03:00:39.282630 1263344 cri.go:89] found id: "5b638f3101a876d73130eeb1fe2aecd6aea61b82763e1be7aa53aeb159ac9666"
	I0308 03:00:39.282638 1263344 cri.go:89] found id: "3b00101f24507649f7780c5b7a064c1d47cdcc534523e1bde1fccb62be8dac99"
	I0308 03:00:39.282642 1263344 cri.go:89] found id: "017a6adea13b1d7e442d1470904ea9c8ce5311131e2f122199d719b139d7ab31"
	I0308 03:00:39.282652 1263344 cri.go:89] found id: "e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d"
	I0308 03:00:39.282659 1263344 cri.go:89] found id: "e10fb1994ad0a16791b66ecc6232ff781f9b987305d0c020d9ac9776a280dae0"
	I0308 03:00:39.282667 1263344 cri.go:89] found id: "467ed4f177ffab0a3446f269e31d4c97c1c8454b50e839fedfa72072239fc771"
	I0308 03:00:39.282674 1263344 cri.go:89] found id: "d1c9c9a01709da56a298714930958b00f5f3151c5f4e702a5df9e18d695b48ae"
	I0308 03:00:39.282678 1263344 cri.go:89] found id: "b8d8e1a75a18a953d4b83284c434b026b3dcfb7f58857a28d4f41e0d9c85aac6"
	I0308 03:00:39.282684 1263344 cri.go:89] found id: "33835a3123b7d8932a967695c9208851459006d8f1b2c40a47158a5ebb524058"
	I0308 03:00:39.282688 1263344 cri.go:89] found id: "c3cd2cdf16085464c0d0701ed3a9d5fd60c59c4595dadcf1a6f056ca4f60f030"
	I0308 03:00:39.282692 1263344 cri.go:89] found id: "b08667073714f119924d5d083d68f7695bbc35278e39c94db60eac785faac00a"
	I0308 03:00:39.282697 1263344 cri.go:89] found id: "5842eb5e64906c22f322f9c7479008d71e3fec9ded85f5f20ebeb9580664fe31"
	I0308 03:00:39.282701 1263344 cri.go:89] found id: "31d138d7bbf75db50d64c6819489eea9d6c30785439df075e4d229475fbc20c2"
	I0308 03:00:39.282709 1263344 cri.go:89] found id: ""
	I0308 03:00:39.282766 1263344 ssh_runner.go:195] Run: sudo runc list -f json
	I0308 03:00:39.338618 1263344 out.go:177] 
	W0308 03:00:39.340399 1263344 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-08T03:00:39Z" level=error msg="stat /run/runc/e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d: no such file or directory"
	
	W0308 03:00:39.340424 1263344 out.go:239] * 
	W0308 03:00:39.344564 1263344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 03:00:39.346179 1263344 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:826: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-096357 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-096357
helpers_test.go:235: (dbg) docker inspect addons-096357:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4",
	        "Created": "2024-03-08T02:58:29.99417869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1254261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-08T02:58:30.263144855Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4/hosts",
	        "LogPath": "/var/lib/docker/containers/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4/d3046b6650113a41bbf53082a016bfacc133aed219e8c9206e1597f7f1007fd4-json.log",
	        "Name": "/addons-096357",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-096357:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-096357",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7e3d5895b991119050677bd5e655d981d7e2255ee5455b18b56c007ab493d742-init/diff:/var/lib/docker/overlay2/3c39ae14a1c3dc02177b83b99337c99805ac4a7cbb72dee66bd275c2d8550aff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e3d5895b991119050677bd5e655d981d7e2255ee5455b18b56c007ab493d742/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e3d5895b991119050677bd5e655d981d7e2255ee5455b18b56c007ab493d742/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e3d5895b991119050677bd5e655d981d7e2255ee5455b18b56c007ab493d742/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-096357",
	                "Source": "/var/lib/docker/volumes/addons-096357/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-096357",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-096357",
	                "name.minikube.sigs.k8s.io": "addons-096357",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a25790a701e11d33e37be29981229ab5a2f49309e200510e7b9444a77f79d84",
	            "SandboxKey": "/var/run/docker/netns/3a25790a701e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-096357": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d3046b665011",
	                        "addons-096357"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "edf7742f4235aff6e15e4039e9ba6ec6a24553437f3dfa636e1881254885f5b6",
	                    "EndpointID": "0856e47015e5551eae2f472204eabba205854e94e23839560fbd8c401942c424",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-096357",
	                        "d3046b665011"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-096357 -n addons-096357
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-096357 logs -n 25: (1.506639341s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-728790   | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC |                     |
	|         | -p download-only-728790              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| delete  | -p download-only-728790              | download-only-728790   | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| start   | -o=json --download-only              | download-only-338197   | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC |                     |
	|         | -p download-only-338197              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| delete  | -p download-only-338197              | download-only-338197   | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| start   | -o=json --download-only              | download-only-564762   | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC |                     |
	|         | -p download-only-564762              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| delete  | -p download-only-564762              | download-only-564762   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| delete  | -p download-only-728790              | download-only-728790   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| delete  | -p download-only-338197              | download-only-338197   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| delete  | -p download-only-564762              | download-only-564762   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| start   | --download-only -p                   | download-docker-688312 | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC |                     |
	|         | download-docker-688312               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-688312            | download-docker-688312 | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| start   | --download-only -p                   | binary-mirror-887601   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC |                     |
	|         | binary-mirror-887601                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34547               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-887601              | binary-mirror-887601   | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| addons  | disable dashboard -p                 | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC |                     |
	|         | addons-096357                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC |                     |
	|         | addons-096357                        |                        |         |         |                     |                     |
	| start   | -p addons-096357 --wait=true         | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 03:00 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         |  --container-runtime=crio            |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-096357 addons                 | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | -p addons-096357                     |                        |         |         |                     |                     |
	| addons  | addons-096357 addons disable         | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-096357 ip                     | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	| addons  | addons-096357 addons disable         | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC | 08 Mar 24 03:00 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-096357          | jenkins | v1.32.0 | 08 Mar 24 03:00 UTC |                     |
	|         | -p addons-096357                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 02:58:07
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 02:58:07.848040 1253594 out.go:291] Setting OutFile to fd 1 ...
	I0308 02:58:07.848475 1253594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:58:07.848491 1253594 out.go:304] Setting ErrFile to fd 2...
	I0308 02:58:07.848499 1253594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:58:07.848976 1253594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 02:58:07.850219 1253594 out.go:298] Setting JSON to false
	I0308 02:58:07.851163 1253594 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":20434,"bootTime":1709846254,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 02:58:07.851230 1253594 start.go:139] virtualization: kvm guest
	I0308 02:58:07.853033 1253594 out.go:177] * [addons-096357] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 02:58:07.854711 1253594 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 02:58:07.854713 1253594 notify.go:220] Checking for updates...
	I0308 02:58:07.855965 1253594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 02:58:07.857229 1253594 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 02:58:07.858471 1253594 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	I0308 02:58:07.859621 1253594 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 02:58:07.860801 1253594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 02:58:07.862223 1253594 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 02:58:07.884570 1253594 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0308 02:58:07.884700 1253594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:58:07.932262 1253594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-08 02:58:07.923627422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:58:07.932375 1253594 docker.go:295] overlay module found
	I0308 02:58:07.934109 1253594 out.go:177] * Using the docker driver based on user configuration
	I0308 02:58:07.935296 1253594 start.go:297] selected driver: docker
	I0308 02:58:07.935308 1253594 start.go:901] validating driver "docker" against <nil>
	I0308 02:58:07.935320 1253594 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 02:58:07.936103 1253594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:58:07.984671 1253594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-08 02:58:07.975737838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:58:07.984906 1253594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 02:58:07.985200 1253594 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 02:58:07.986860 1253594 out.go:177] * Using Docker driver with root privileges
	I0308 02:58:07.988352 1253594 cni.go:84] Creating CNI manager for ""
	I0308 02:58:07.988370 1253594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0308 02:58:07.988380 1253594 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0308 02:58:07.988443 1253594 start.go:340] cluster config:
	{Name:addons-096357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:58:07.989782 1253594 out.go:177] * Starting "addons-096357" primary control-plane node in "addons-096357" cluster
	I0308 02:58:07.991042 1253594 cache.go:121] Beginning downloading kic base image for docker with crio
	I0308 02:58:07.992305 1253594 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0308 02:58:07.993574 1253594 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:58:07.993641 1253594 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0308 02:58:07.993663 1253594 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 02:58:07.993675 1253594 cache.go:56] Caching tarball of preloaded images
	I0308 02:58:07.993813 1253594 preload.go:173] Found /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 02:58:07.993831 1253594 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 02:58:07.994177 1253594 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/config.json ...
	I0308 02:58:07.994204 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/config.json: {Name:mke782156128fe9cc35a3f03c9f28dfea004e045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:08.008624 1253594 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0308 02:58:08.008746 1253594 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0308 02:58:08.008761 1253594 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0308 02:58:08.008765 1253594 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0308 02:58:08.008773 1253594 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0308 02:58:08.008780 1253594 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0308 02:58:19.531438 1253594 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0308 02:58:19.531487 1253594 cache.go:194] Successfully downloaded all kic artifacts
	I0308 02:58:19.531522 1253594 start.go:360] acquireMachinesLock for addons-096357: {Name:mk08648cdaca399025e8f1d58c6c633983097f69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 02:58:19.531617 1253594 start.go:364] duration metric: took 74.055µs to acquireMachinesLock for "addons-096357"
	I0308 02:58:19.531641 1253594 start.go:93] Provisioning new machine with config: &{Name:addons-096357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 02:58:19.531716 1253594 start.go:125] createHost starting for "" (driver="docker")
	I0308 02:58:19.533434 1253594 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0308 02:58:19.533704 1253594 start.go:159] libmachine.API.Create for "addons-096357" (driver="docker")
	I0308 02:58:19.533740 1253594 client.go:168] LocalClient.Create starting
	I0308 02:58:19.533835 1253594 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem
	I0308 02:58:19.817295 1253594 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/cert.pem
	I0308 02:58:19.882819 1253594 cli_runner.go:164] Run: docker network inspect addons-096357 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0308 02:58:19.898740 1253594 cli_runner.go:211] docker network inspect addons-096357 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0308 02:58:19.898824 1253594 network_create.go:281] running [docker network inspect addons-096357] to gather additional debugging logs...
	I0308 02:58:19.898845 1253594 cli_runner.go:164] Run: docker network inspect addons-096357
	W0308 02:58:19.913565 1253594 cli_runner.go:211] docker network inspect addons-096357 returned with exit code 1
	I0308 02:58:19.913620 1253594 network_create.go:284] error running [docker network inspect addons-096357]: docker network inspect addons-096357: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-096357 not found
	I0308 02:58:19.913635 1253594 network_create.go:286] output of [docker network inspect addons-096357]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-096357 not found
	
	** /stderr **
	I0308 02:58:19.913720 1253594 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0308 02:58:19.929171 1253594 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002737420}
	I0308 02:58:19.929226 1253594 network_create.go:124] attempt to create docker network addons-096357 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0308 02:58:19.929273 1253594 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-096357 addons-096357
	I0308 02:58:19.982091 1253594 network_create.go:108] docker network addons-096357 192.168.49.0/24 created
	I0308 02:58:19.982124 1253594 kic.go:121] calculated static IP "192.168.49.2" for the "addons-096357" container
	I0308 02:58:19.982194 1253594 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0308 02:58:19.996917 1253594 cli_runner.go:164] Run: docker volume create addons-096357 --label name.minikube.sigs.k8s.io=addons-096357 --label created_by.minikube.sigs.k8s.io=true
	I0308 02:58:20.013747 1253594 oci.go:103] Successfully created a docker volume addons-096357
	I0308 02:58:20.013839 1253594 cli_runner.go:164] Run: docker run --rm --name addons-096357-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-096357 --entrypoint /usr/bin/test -v addons-096357:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0308 02:58:24.838270 1253594 cli_runner.go:217] Completed: docker run --rm --name addons-096357-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-096357 --entrypoint /usr/bin/test -v addons-096357:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (4.824381579s)
	I0308 02:58:24.838314 1253594 oci.go:107] Successfully prepared a docker volume addons-096357
	I0308 02:58:24.838344 1253594 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:58:24.838369 1253594 kic.go:194] Starting extracting preloaded images to volume ...
	I0308 02:58:24.838435 1253594 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-096357:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0308 02:58:29.932305 1253594 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-096357:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (5.093827169s)
	I0308 02:58:29.932341 1253594 kic.go:203] duration metric: took 5.093967368s to extract preloaded images to volume ...
	W0308 02:58:29.932497 1253594 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0308 02:58:29.932686 1253594 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0308 02:58:29.979931 1253594 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-096357 --name addons-096357 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-096357 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-096357 --network addons-096357 --ip 192.168.49.2 --volume addons-096357:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0308 02:58:30.270716 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Running}}
	I0308 02:58:30.286272 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:30.303244 1253594 cli_runner.go:164] Run: docker exec addons-096357 stat /var/lib/dpkg/alternatives/iptables
	I0308 02:58:30.341325 1253594 oci.go:144] the created container "addons-096357" has a running status.
	I0308 02:58:30.341365 1253594 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa...
	I0308 02:58:30.579441 1253594 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0308 02:58:30.602870 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:30.617972 1253594 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0308 02:58:30.617997 1253594 kic_runner.go:114] Args: [docker exec --privileged addons-096357 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0308 02:58:30.663812 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:30.683727 1253594 machine.go:94] provisionDockerMachine start ...
	I0308 02:58:30.683860 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:30.699532 1253594 main.go:141] libmachine: Using SSH client type: native
	I0308 02:58:30.699745 1253594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0308 02:58:30.699758 1253594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 02:58:30.892825 1253594 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-096357
	
	I0308 02:58:30.892875 1253594 ubuntu.go:169] provisioning hostname "addons-096357"
	I0308 02:58:30.892934 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:30.909427 1253594 main.go:141] libmachine: Using SSH client type: native
	I0308 02:58:30.909635 1253594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0308 02:58:30.909656 1253594 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-096357 && echo "addons-096357" | sudo tee /etc/hostname
	I0308 02:58:31.031564 1253594 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-096357
	
	I0308 02:58:31.031655 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.047824 1253594 main.go:141] libmachine: Using SSH client type: native
	I0308 02:58:31.048016 1253594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0308 02:58:31.048032 1253594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-096357' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-096357/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-096357' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 02:58:31.161290 1253594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 02:58:31.161322 1253594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18333-1245188/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-1245188/.minikube}
	I0308 02:58:31.161349 1253594 ubuntu.go:177] setting up certificates
	I0308 02:58:31.161364 1253594 provision.go:84] configureAuth start
	I0308 02:58:31.161429 1253594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-096357
	I0308 02:58:31.176872 1253594 provision.go:143] copyHostCerts
	I0308 02:58:31.176945 1253594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.pem (1082 bytes)
	I0308 02:58:31.177055 1253594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-1245188/.minikube/cert.pem (1123 bytes)
	I0308 02:58:31.177105 1253594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-1245188/.minikube/key.pem (1679 bytes)
	I0308 02:58:31.177150 1253594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca-key.pem org=jenkins.addons-096357 san=[127.0.0.1 192.168.49.2 addons-096357 localhost minikube]
	I0308 02:58:31.383138 1253594 provision.go:177] copyRemoteCerts
	I0308 02:58:31.383203 1253594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 02:58:31.383238 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.399533 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:31.486393 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 02:58:31.508751 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 02:58:31.529939 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 02:58:31.550315 1253594 provision.go:87] duration metric: took 388.932588ms to configureAuth
	I0308 02:58:31.550344 1253594 ubuntu.go:193] setting minikube options for container-runtime
	I0308 02:58:31.550531 1253594 config.go:182] Loaded profile config "addons-096357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 02:58:31.550683 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.566020 1253594 main.go:141] libmachine: Using SSH client type: native
	I0308 02:58:31.566194 1253594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0308 02:58:31.566223 1253594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 02:58:31.765155 1253594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 02:58:31.765190 1253594 machine.go:97] duration metric: took 1.081433089s to provisionDockerMachine
	I0308 02:58:31.765200 1253594 client.go:171] duration metric: took 12.231452342s to LocalClient.Create
	I0308 02:58:31.765217 1253594 start.go:167] duration metric: took 12.231516885s to libmachine.API.Create "addons-096357"
	I0308 02:58:31.765224 1253594 start.go:293] postStartSetup for "addons-096357" (driver="docker")
	I0308 02:58:31.765234 1253594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 02:58:31.765295 1253594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 02:58:31.765326 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.781486 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:31.865864 1253594 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 02:58:31.868714 1253594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0308 02:58:31.868742 1253594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0308 02:58:31.868749 1253594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0308 02:58:31.868756 1253594 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0308 02:58:31.868768 1253594 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-1245188/.minikube/addons for local assets ...
	I0308 02:58:31.868820 1253594 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-1245188/.minikube/files for local assets ...
	I0308 02:58:31.868847 1253594 start.go:296] duration metric: took 103.617168ms for postStartSetup
	I0308 02:58:31.869090 1253594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-096357
	I0308 02:58:31.884969 1253594 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/config.json ...
	I0308 02:58:31.885240 1253594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 02:58:31.885298 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:31.901053 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:31.982312 1253594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0308 02:58:31.986472 1253594 start.go:128] duration metric: took 12.454740412s to createHost
	I0308 02:58:31.986501 1253594 start.go:83] releasing machines lock for "addons-096357", held for 12.454872218s
	I0308 02:58:31.986561 1253594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-096357
	I0308 02:58:32.001898 1253594 ssh_runner.go:195] Run: cat /version.json
	I0308 02:58:32.001938 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:32.002019 1253594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 02:58:32.002096 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:32.017726 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:32.018677 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:32.096906 1253594 ssh_runner.go:195] Run: systemctl --version
	I0308 02:58:32.165511 1253594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 02:58:32.303254 1253594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 02:58:32.307603 1253594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 02:58:32.325885 1253594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0308 02:58:32.325966 1253594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 02:58:32.352656 1253594 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0308 02:58:32.352682 1253594 start.go:494] detecting cgroup driver to use...
	I0308 02:58:32.352719 1253594 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0308 02:58:32.352770 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 02:58:32.366492 1253594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 02:58:32.376307 1253594 docker.go:217] disabling cri-docker service (if available) ...
	I0308 02:58:32.376374 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 02:58:32.388296 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 02:58:32.400492 1253594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 02:58:32.473656 1253594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 02:58:32.549609 1253594 docker.go:233] disabling docker service ...
	I0308 02:58:32.549676 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 02:58:32.568231 1253594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 02:58:32.578916 1253594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 02:58:32.649628 1253594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 02:58:32.725504 1253594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 02:58:32.735403 1253594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 02:58:32.749227 1253594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 02:58:32.749277 1253594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:58:32.757924 1253594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 02:58:32.757995 1253594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:58:32.766615 1253594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:58:32.775257 1253594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:58:32.783821 1253594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 02:58:32.791830 1253594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 02:58:32.799220 1253594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 02:58:32.806506 1253594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 02:58:32.884645 1253594 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 02:58:32.994138 1253594 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 02:58:32.994234 1253594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 02:58:32.997562 1253594 start.go:562] Will wait 60s for crictl version
	I0308 02:58:32.997619 1253594 ssh_runner.go:195] Run: which crictl
	I0308 02:58:33.000631 1253594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 02:58:33.033339 1253594 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0308 02:58:33.033412 1253594 ssh_runner.go:195] Run: crio --version
	I0308 02:58:33.067476 1253594 ssh_runner.go:195] Run: crio --version
	I0308 02:58:33.100850 1253594 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0308 02:58:33.102328 1253594 cli_runner.go:164] Run: docker network inspect addons-096357 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0308 02:58:33.117786 1253594 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0308 02:58:33.121438 1253594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 02:58:33.131414 1253594 kubeadm.go:877] updating cluster {Name:addons-096357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 02:58:33.131568 1253594 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:58:33.131634 1253594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 02:58:33.186896 1253594 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 02:58:33.186920 1253594 crio.go:415] Images already preloaded, skipping extraction
	I0308 02:58:33.186962 1253594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 02:58:33.220288 1253594 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 02:58:33.220313 1253594 cache_images.go:84] Images are preloaded, skipping loading
	I0308 02:58:33.220321 1253594 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 crio true true} ...
	I0308 02:58:33.220424 1253594 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-096357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 02:58:33.220507 1253594 ssh_runner.go:195] Run: crio config
	I0308 02:58:33.261219 1253594 cni.go:84] Creating CNI manager for ""
	I0308 02:58:33.261244 1253594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0308 02:58:33.261259 1253594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 02:58:33.261283 1253594 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-096357 NodeName:addons-096357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 02:58:33.261444 1253594 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-096357"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 02:58:33.261509 1253594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 02:58:33.269735 1253594 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 02:58:33.269793 1253594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 02:58:33.277477 1253594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0308 02:58:33.292814 1253594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 02:58:33.308089 1253594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0308 02:58:33.323226 1253594 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0308 02:58:33.326350 1253594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 02:58:33.336647 1253594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 02:58:33.407007 1253594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 02:58:33.418978 1253594 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357 for IP: 192.168.49.2
	I0308 02:58:33.419008 1253594 certs.go:194] generating shared ca certs ...
	I0308 02:58:33.419032 1253594 certs.go:226] acquiring lock for ca certs: {Name:mkab513412908ef55b41438557e8ea33978e0150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.419157 1253594 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.key
	I0308 02:58:33.764377 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.crt ...
	I0308 02:58:33.764414 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.crt: {Name:mka3abcc00eaaf2abc6c06778272723fa7615945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.764586 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.key ...
	I0308 02:58:33.764598 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.key: {Name:mk2d876913085796b5af769962cc6e24b1f16b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.764666 1253594 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.key
	I0308 02:58:33.836453 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.crt ...
	I0308 02:58:33.836482 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.crt: {Name:mk84288a8eaed2375f49d6c0702b43bd4f5c08ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.836629 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.key ...
	I0308 02:58:33.836640 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.key: {Name:mk18a23a1ec68618d9d32fb1cb6aa4af87e06bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:33.836705 1253594 certs.go:256] generating profile certs ...
	I0308 02:58:33.836769 1253594 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.key
	I0308 02:58:33.836789 1253594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt with IP's: []
	I0308 02:58:34.103290 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt ...
	I0308 02:58:34.103326 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: {Name:mk540ccba3f8251447117c2919f6bba9c6c31dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.103493 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.key ...
	I0308 02:58:34.103503 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.key: {Name:mk937b0016eecc00549e496706148abe75501d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.103568 1253594 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key.89614b59
	I0308 02:58:34.103588 1253594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt.89614b59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0308 02:58:34.162869 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt.89614b59 ...
	I0308 02:58:34.162900 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt.89614b59: {Name:mk41617eb99b5efb56b48af803be0936320366cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.163041 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key.89614b59 ...
	I0308 02:58:34.163054 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key.89614b59: {Name:mkc81ffeb0f280669973fcf28eda5e7e70cc6351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.163121 1253594 certs.go:381] copying /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt.89614b59 -> /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt
	I0308 02:58:34.163212 1253594 certs.go:385] copying /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key.89614b59 -> /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key
	I0308 02:58:34.163264 1253594 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.key
	I0308 02:58:34.163283 1253594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.crt with IP's: []
	I0308 02:58:34.295369 1253594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.crt ...
	I0308 02:58:34.295405 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.crt: {Name:mk3787e49d2d9f91f7f45d0abf562d6918d02c77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.295571 1253594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.key ...
	I0308 02:58:34.295585 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.key: {Name:mk4348ae0f9fa4fe8d97588fc81878bf07ec4239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:34.295754 1253594 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 02:58:34.295800 1253594 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/ca.pem (1082 bytes)
	I0308 02:58:34.295828 1253594 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/cert.pem (1123 bytes)
	I0308 02:58:34.295852 1253594 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-1245188/.minikube/certs/key.pem (1679 bytes)
	I0308 02:58:34.296475 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 02:58:34.319215 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 02:58:34.340125 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 02:58:34.361144 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 02:58:34.382161 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 02:58:34.402963 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 02:58:34.423356 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 02:58:34.443559 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 02:58:34.463629 1253594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-1245188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 02:58:34.483538 1253594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 02:58:34.498359 1253594 ssh_runner.go:195] Run: openssl version
	I0308 02:58:34.503073 1253594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 02:58:34.510964 1253594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 02:58:34.513997 1253594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:58 /usr/share/ca-certificates/minikubeCA.pem
	I0308 02:58:34.514047 1253594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 02:58:34.520066 1253594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 02:58:34.527966 1253594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 02:58:34.530816 1253594 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 02:58:34.530895 1253594 kubeadm.go:391] StartCluster: {Name:addons-096357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-096357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:58:34.531016 1253594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 02:58:34.531058 1253594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 02:58:34.563954 1253594 cri.go:89] found id: ""
	I0308 02:58:34.564043 1253594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 02:58:34.572382 1253594 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 02:58:34.580231 1253594 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0308 02:58:34.580285 1253594 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 02:58:34.587874 1253594 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 02:58:34.587892 1253594 kubeadm.go:156] found existing configuration files:
	
	I0308 02:58:34.587939 1253594 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 02:58:34.595132 1253594 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 02:58:34.595179 1253594 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 02:58:34.602351 1253594 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 02:58:34.609545 1253594 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 02:58:34.609651 1253594 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 02:58:34.616510 1253594 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 02:58:34.623708 1253594 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 02:58:34.623745 1253594 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 02:58:34.630647 1253594 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 02:58:34.637675 1253594 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 02:58:34.637724 1253594 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 02:58:34.644523 1253594 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0308 02:58:34.683028 1253594 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 02:58:34.683140 1253594 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 02:58:34.717300 1253594 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0308 02:58:34.717364 1253594 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1053-gcp
	I0308 02:58:34.717392 1253594 kubeadm.go:309] OS: Linux
	I0308 02:58:34.717431 1253594 kubeadm.go:309] CGROUPS_CPU: enabled
	I0308 02:58:34.717519 1253594 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0308 02:58:34.717612 1253594 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0308 02:58:34.717662 1253594 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0308 02:58:34.717711 1253594 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0308 02:58:34.717752 1253594 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0308 02:58:34.717833 1253594 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0308 02:58:34.717911 1253594 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0308 02:58:34.717980 1253594 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0308 02:58:34.776002 1253594 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 02:58:34.776152 1253594 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 02:58:34.776260 1253594 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 02:58:34.970445 1253594 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 02:58:34.973705 1253594 out.go:204]   - Generating certificates and keys ...
	I0308 02:58:34.973810 1253594 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 02:58:34.973887 1253594 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 02:58:35.139783 1253594 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 02:58:35.231744 1253594 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 02:58:35.434839 1253594 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 02:58:35.485293 1253594 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 02:58:35.660888 1253594 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 02:58:35.661038 1253594 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-096357 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0308 02:58:35.923158 1253594 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 02:58:35.923299 1253594 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-096357 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0308 02:58:36.220423 1253594 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 02:58:36.334641 1253594 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 02:58:36.664870 1253594 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 02:58:36.664963 1253594 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 02:58:36.760600 1253594 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 02:58:37.098056 1253594 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 02:58:37.197441 1253594 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 02:58:37.411928 1253594 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 02:58:37.412397 1253594 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 02:58:37.414705 1253594 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 02:58:37.417485 1253594 out.go:204]   - Booting up control plane ...
	I0308 02:58:37.417579 1253594 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 02:58:37.417691 1253594 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 02:58:37.417798 1253594 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 02:58:37.425693 1253594 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 02:58:37.426593 1253594 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 02:58:37.426633 1253594 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 02:58:37.507097 1253594 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 02:58:42.009468 1253594 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502414 seconds
	I0308 02:58:42.009673 1253594 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 02:58:42.020211 1253594 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 02:58:42.539432 1253594 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 02:58:42.539708 1253594 kubeadm.go:309] [mark-control-plane] Marking the node addons-096357 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 02:58:43.048435 1253594 kubeadm.go:309] [bootstrap-token] Using token: r50gpi.1afv8oc1kcg79288
	I0308 02:58:43.049984 1253594 out.go:204]   - Configuring RBAC rules ...
	I0308 02:58:43.050146 1253594 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 02:58:43.054368 1253594 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 02:58:43.072256 1253594 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 02:58:43.074854 1253594 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 02:58:43.077374 1253594 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 02:58:43.080695 1253594 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 02:58:43.089951 1253594 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 02:58:43.285931 1253594 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 02:58:43.458398 1253594 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 02:58:43.459328 1253594 kubeadm.go:309] 
	I0308 02:58:43.459466 1253594 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 02:58:43.459487 1253594 kubeadm.go:309] 
	I0308 02:58:43.459589 1253594 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 02:58:43.459599 1253594 kubeadm.go:309] 
	I0308 02:58:43.459631 1253594 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 02:58:43.459725 1253594 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 02:58:43.459830 1253594 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 02:58:43.459853 1253594 kubeadm.go:309] 
	I0308 02:58:43.459929 1253594 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 02:58:43.459939 1253594 kubeadm.go:309] 
	I0308 02:58:43.459998 1253594 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 02:58:43.460028 1253594 kubeadm.go:309] 
	I0308 02:58:43.460126 1253594 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 02:58:43.460237 1253594 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 02:58:43.460346 1253594 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 02:58:43.460361 1253594 kubeadm.go:309] 
	I0308 02:58:43.460460 1253594 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 02:58:43.460555 1253594 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 02:58:43.460564 1253594 kubeadm.go:309] 
	I0308 02:58:43.460653 1253594 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r50gpi.1afv8oc1kcg79288 \
	I0308 02:58:43.460783 1253594 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1cff8f068d2bc9c711e0cbd73acfe61141d16836e3de4386ac9d96e369e769fb \
	I0308 02:58:43.460826 1253594 kubeadm.go:309] 	--control-plane 
	I0308 02:58:43.460844 1253594 kubeadm.go:309] 
	I0308 02:58:43.460967 1253594 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 02:58:43.460977 1253594 kubeadm.go:309] 
	I0308 02:58:43.461071 1253594 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r50gpi.1afv8oc1kcg79288 \
	I0308 02:58:43.461194 1253594 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1cff8f068d2bc9c711e0cbd73acfe61141d16836e3de4386ac9d96e369e769fb 
	I0308 02:58:43.462993 1253594 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-gcp\n", err: exit status 1
	I0308 02:58:43.463095 1253594 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 02:58:43.463134 1253594 cni.go:84] Creating CNI manager for ""
	I0308 02:58:43.463156 1253594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0308 02:58:43.464944 1253594 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0308 02:58:43.466304 1253594 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0308 02:58:43.470695 1253594 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 02:58:43.470716 1253594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0308 02:58:43.488435 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 02:58:44.219423 1253594 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 02:58:44.219499 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:44.219515 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-096357 minikube.k8s.io/updated_at=2024_03_08T02_58_44_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=addons-096357 minikube.k8s.io/primary=true
	I0308 02:58:44.284823 1253594 ops.go:34] apiserver oom_adj: -16
	I0308 02:58:44.284902 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:44.785189 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:45.285616 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:45.784917 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:46.285240 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:46.785423 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:47.285364 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:47.785786 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:48.285892 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:48.785032 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:49.285431 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:49.784958 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:50.285031 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:50.785687 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:51.285634 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:51.785141 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:52.285471 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:52.785147 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:53.285630 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:53.785295 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:54.285801 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:54.785060 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:55.285647 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:55.785680 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:56.284959 1253594 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:58:56.352249 1253594 kubeadm.go:1106] duration metric: took 12.132802552s to wait for elevateKubeSystemPrivileges
	W0308 02:58:56.352299 1253594 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 02:58:56.352309 1253594 kubeadm.go:393] duration metric: took 21.821420973s to StartCluster
	I0308 02:58:56.352333 1253594 settings.go:142] acquiring lock: {Name:mke0ce76fc205916bb79eabaf8ed113e38eddf4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:56.352467 1253594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 02:58:56.353035 1253594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/kubeconfig: {Name:mk98e1f656e06fac7ff6c69fb4148cf4fd3984bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:56.353289 1253594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 02:58:56.353359 1253594 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 02:58:56.354629 1253594 out.go:177] * Verifying Kubernetes components...
	I0308 02:58:56.353429 1253594 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0308 02:58:56.353550 1253594 config.go:182] Loaded profile config "addons-096357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 02:58:56.356164 1253594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 02:58:56.356184 1253594 addons.go:69] Setting yakd=true in profile "addons-096357"
	I0308 02:58:56.356222 1253594 addons.go:234] Setting addon yakd=true in "addons-096357"
	I0308 02:58:56.356247 1253594 addons.go:69] Setting ingress-dns=true in profile "addons-096357"
	I0308 02:58:56.356262 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.356292 1253594 addons.go:234] Setting addon ingress-dns=true in "addons-096357"
	I0308 02:58:56.356342 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.356608 1253594 addons.go:69] Setting default-storageclass=true in profile "addons-096357"
	I0308 02:58:56.356647 1253594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-096357"
	I0308 02:58:56.356856 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.356871 1253594 addons.go:69] Setting gcp-auth=true in profile "addons-096357"
	I0308 02:58:56.356894 1253594 mustload.go:65] Loading cluster: addons-096357
	I0308 02:58:56.356920 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.357088 1253594 config.go:182] Loaded profile config "addons-096357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 02:58:56.357418 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.357646 1253594 addons.go:69] Setting registry=true in profile "addons-096357"
	I0308 02:58:56.357647 1253594 addons.go:69] Setting helm-tiller=true in profile "addons-096357"
	I0308 02:58:56.357687 1253594 addons.go:234] Setting addon registry=true in "addons-096357"
	I0308 02:58:56.357688 1253594 addons.go:234] Setting addon helm-tiller=true in "addons-096357"
	I0308 02:58:56.357687 1253594 addons.go:69] Setting metrics-server=true in profile "addons-096357"
	I0308 02:58:56.357719 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.357719 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.357728 1253594 addons.go:234] Setting addon metrics-server=true in "addons-096357"
	I0308 02:58:56.357764 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.358169 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.358180 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.358231 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.358421 1253594 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-096357"
	I0308 02:58:56.358464 1253594 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-096357"
	I0308 02:58:56.358504 1253594 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-096357"
	I0308 02:58:56.358598 1253594 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-096357"
	I0308 02:58:56.358642 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.358724 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.359125 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.359736 1253594 addons.go:69] Setting storage-provisioner=true in profile "addons-096357"
	I0308 02:58:56.359778 1253594 addons.go:234] Setting addon storage-provisioner=true in "addons-096357"
	I0308 02:58:56.359809 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.360310 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.356863 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.361203 1253594 addons.go:69] Setting ingress=true in profile "addons-096357"
	I0308 02:58:56.362159 1253594 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-096357"
	I0308 02:58:56.362194 1253594 addons.go:69] Setting cloud-spanner=true in profile "addons-096357"
	I0308 02:58:56.369819 1253594 addons.go:234] Setting addon cloud-spanner=true in "addons-096357"
	I0308 02:58:56.369901 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.370416 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.370549 1253594 addons.go:234] Setting addon ingress=true in "addons-096357"
	I0308 02:58:56.370673 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.371191 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.362214 1253594 addons.go:69] Setting inspektor-gadget=true in profile "addons-096357"
	I0308 02:58:56.372101 1253594 addons.go:234] Setting addon inspektor-gadget=true in "addons-096357"
	I0308 02:58:56.372155 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.362229 1253594 addons.go:69] Setting volumesnapshots=true in profile "addons-096357"
	I0308 02:58:56.372278 1253594 addons.go:234] Setting addon volumesnapshots=true in "addons-096357"
	I0308 02:58:56.372365 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.372626 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.372899 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.380005 1253594 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-096357"
	I0308 02:58:56.384583 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.385294 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.397248 1253594 addons.go:234] Setting addon default-storageclass=true in "addons-096357"
	I0308 02:58:56.397310 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.397824 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.400705 1253594 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0308 02:58:56.402252 1253594 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0308 02:58:56.402277 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0308 02:58:56.402473 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.404351 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0308 02:58:56.405577 1253594 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0308 02:58:56.406826 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0308 02:58:56.406849 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0308 02:58:56.406909 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.408301 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0308 02:58:56.409376 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0308 02:58:56.410921 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0308 02:58:56.412907 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0308 02:58:56.414764 1253594 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0308 02:58:56.416472 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0308 02:58:56.416416 1253594 out.go:177]   - Using image docker.io/registry:2.8.3
	I0308 02:58:56.417909 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0308 02:58:56.421281 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0308 02:58:56.422681 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0308 02:58:56.422704 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0308 02:58:56.422775 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.421678 1253594 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0308 02:58:56.423020 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0308 02:58:56.423075 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.421753 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.425339 1253594 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0308 02:58:56.426635 1253594 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 02:58:56.426655 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 02:58:56.426707 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.434229 1253594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 02:58:56.429011 1253594 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-096357"
	I0308 02:58:56.436311 1253594 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 02:58:56.436417 1253594 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0308 02:58:56.436464 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:58:56.438159 1253594 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0308 02:58:56.441523 1253594 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0308 02:58:56.441542 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0308 02:58:56.444117 1253594 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0308 02:58:56.444138 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0308 02:58:56.444187 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.439531 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 02:58:56.444258 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.439543 1253594 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0308 02:58:56.440024 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:58:56.441750 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.445657 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0308 02:58:56.445676 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0308 02:58:56.445732 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.446487 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.454901 1253594 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0308 02:58:56.452832 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.459507 1253594 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0308 02:58:56.461217 1253594 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0308 02:58:56.462796 1253594 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0308 02:58:56.462826 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0308 02:58:56.462893 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.486197 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.486285 1253594 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 02:58:56.486308 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 02:58:56.486363 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.487023 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.493893 1253594 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0308 02:58:56.495333 1253594 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0308 02:58:56.496604 1253594 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0308 02:58:56.496619 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0308 02:58:56.496670 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.498008 1253594 out.go:177]   - Using image docker.io/busybox:stable
	I0308 02:58:56.497826 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.499737 1253594 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0308 02:58:56.499761 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0308 02:58:56.499819 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.508225 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.509641 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.512953 1253594 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0308 02:58:56.511928 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.513744 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.514495 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0308 02:58:56.514510 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0308 02:58:56.514565 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:58:56.519733 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.521087 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.522689 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.525946 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:58:56.531614 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	W0308 02:58:56.541806 1253594 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0308 02:58:56.541841 1253594 retry.go:31] will retry after 323.578107ms: ssh: handshake failed: EOF
	I0308 02:58:56.758401 1253594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 02:58:56.946182 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0308 02:58:56.952307 1253594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 02:58:57.035227 1253594 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0308 02:58:57.035266 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0308 02:58:57.036576 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0308 02:58:57.040520 1253594 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 02:58:57.040550 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0308 02:58:57.042575 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0308 02:58:57.042598 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0308 02:58:57.055668 1253594 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0308 02:58:57.055704 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0308 02:58:57.135860 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0308 02:58:57.136789 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0308 02:58:57.150346 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 02:58:57.241752 1253594 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0308 02:58:57.241790 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0308 02:58:57.242115 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 02:58:57.246161 1253594 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 02:58:57.246191 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 02:58:57.252799 1253594 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0308 02:58:57.252838 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0308 02:58:57.256559 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0308 02:58:57.256585 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0308 02:58:57.336339 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0308 02:58:57.336371 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0308 02:58:57.336679 1253594 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0308 02:58:57.336707 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0308 02:58:57.343923 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0308 02:58:57.343954 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0308 02:58:57.544702 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0308 02:58:57.552246 1253594 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 02:58:57.552296 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 02:58:57.635254 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0308 02:58:57.646145 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0308 02:58:57.646201 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0308 02:58:57.647795 1253594 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0308 02:58:57.647835 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0308 02:58:57.737005 1253594 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0308 02:58:57.737097 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0308 02:58:57.742680 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0308 02:58:57.742763 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0308 02:58:57.846563 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 02:58:57.848831 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0308 02:58:57.848860 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0308 02:58:57.936241 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0308 02:58:57.936277 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0308 02:58:57.936471 1253594 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0308 02:58:57.936489 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0308 02:58:58.142166 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0308 02:58:58.142202 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0308 02:58:58.153077 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0308 02:58:58.153108 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0308 02:58:58.335362 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0308 02:58:58.335398 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0308 02:58:58.338706 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0308 02:58:58.343019 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0308 02:58:58.538295 1253594 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0308 02:58:58.538328 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0308 02:58:58.555214 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0308 02:58:58.555253 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0308 02:58:58.734334 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0308 02:58:58.734425 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0308 02:58:58.854642 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0308 02:58:58.854675 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0308 02:58:59.038041 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0308 02:58:59.039349 1253594 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0308 02:58:59.039378 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0308 02:58:59.240225 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0308 02:58:59.240264 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0308 02:58:59.335819 1253594 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0308 02:58:59.335923 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0308 02:58:59.343378 1253594 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.584913952s)
	I0308 02:58:59.343489 1253594 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0308 02:58:59.555123 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0308 02:58:59.555219 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0308 02:58:59.752243 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0308 02:58:59.841841 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0308 02:58:59.841941 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0308 02:59:00.054948 1253594 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-096357" context rescaled to 1 replicas
	I0308 02:59:00.238060 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0308 02:59:00.238092 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0308 02:59:00.549322 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.603098859s)
	I0308 02:59:00.549244 1253594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.596893419s)
	I0308 02:59:00.550646 1253594 node_ready.go:35] waiting up to 6m0s for node "addons-096357" to be "Ready" ...
	I0308 02:59:00.652490 1253594 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0308 02:59:00.652588 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0308 02:59:01.152117 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0308 02:59:02.643845 1253594 node_ready.go:53] node "addons-096357" has status "Ready":"False"
	I0308 02:59:03.146594 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.109971893s)
	I0308 02:59:03.146864 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.010424341s)
	I0308 02:59:03.245769 1253594 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0308 02:59:03.245933 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:59:03.267540 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:59:03.742737 1253594 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0308 02:59:03.849102 1253594 addons.go:234] Setting addon gcp-auth=true in "addons-096357"
	I0308 02:59:03.849174 1253594 host.go:66] Checking if "addons-096357" exists ...
	I0308 02:59:03.849753 1253594 cli_runner.go:164] Run: docker container inspect addons-096357 --format={{.State.Status}}
	I0308 02:59:03.868818 1253594 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0308 02:59:03.868871 1253594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-096357
	I0308 02:59:03.888759 1253594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/addons-096357/id_rsa Username:docker}
	I0308 02:59:04.342520 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.205684212s)
	I0308 02:59:04.342563 1253594 addons.go:470] Verifying addon ingress=true in "addons-096357"
	I0308 02:59:04.342561 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.192170751s)
	I0308 02:59:04.342620 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.100460642s)
	I0308 02:59:04.344031 1253594 out.go:177] * Verifying ingress addon...
	I0308 02:59:04.342667 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.797887363s)
	I0308 02:59:04.342712 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.707424475s)
	I0308 02:59:04.342792 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.496192304s)
	I0308 02:59:04.342873 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.004121014s)
	I0308 02:59:04.342911 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.99984952s)
	I0308 02:59:04.343007 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.304928048s)
	I0308 02:59:04.343163 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.590801598s)
	I0308 02:59:04.345237 1253594 addons.go:470] Verifying addon registry=true in "addons-096357"
	I0308 02:59:04.346457 1253594 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-096357 service yakd-dashboard -n yakd-dashboard
	
	I0308 02:59:04.345346 1253594 addons.go:470] Verifying addon metrics-server=true in "addons-096357"
	W0308 02:59:04.345384 1253594 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0308 02:59:04.346233 1253594 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0308 02:59:04.347710 1253594 out.go:177] * Verifying registry addon...
	I0308 02:59:04.347844 1253594 retry.go:31] will retry after 359.745349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0308 02:59:04.350002 1253594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0308 02:59:04.353518 1253594 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0308 02:59:04.353546 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:04.353851 1253594 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0308 02:59:04.353871 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:04.709495 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0308 02:59:04.851190 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:04.853194 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:05.053623 1253594 node_ready.go:53] node "addons-096357" has status "Ready":"False"
	I0308 02:59:05.178437 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.02620209s)
	I0308 02:59:05.178505 1253594 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.309650717s)
	I0308 02:59:05.178528 1253594 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-096357"
	I0308 02:59:05.180109 1253594 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0308 02:59:05.181629 1253594 out.go:177] * Verifying csi-hostpath-driver addon...
	I0308 02:59:05.182811 1253594 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0308 02:59:05.183917 1253594 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0308 02:59:05.183936 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0308 02:59:05.183386 1253594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0308 02:59:05.240511 1253594 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0308 02:59:05.240536 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:05.254541 1253594 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0308 02:59:05.254566 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0308 02:59:05.272370 1253594 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0308 02:59:05.272397 1253594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0308 02:59:05.288558 1253594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0308 02:59:05.353011 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:05.354836 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:05.740689 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:05.854808 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:05.855311 1253594 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0308 02:59:05.855383 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:06.055894 1253594 node_ready.go:49] node "addons-096357" has status "Ready":"True"
	I0308 02:59:06.055987 1253594 node_ready.go:38] duration metric: took 5.505259164s for node "addons-096357" to be "Ready" ...
	I0308 02:59:06.056005 1253594 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 02:59:06.064816 1253594 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gfwfq" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:06.248505 1253594 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0308 02:59:06.248546 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:06.352461 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:06.355857 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:06.665035 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.955485681s)
	I0308 02:59:06.739911 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:06.854847 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:06.856378 1253594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.567778167s)
	I0308 02:59:06.857453 1253594 addons.go:470] Verifying addon gcp-auth=true in "addons-096357"
	I0308 02:59:06.859058 1253594 out.go:177] * Verifying gcp-auth addon...
	I0308 02:59:06.860349 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:06.861436 1253594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0308 02:59:06.935086 1253594 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0308 02:59:06.935128 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:07.243678 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:07.436035 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:07.438350 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:07.438730 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:07.739276 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:07.852823 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:07.855981 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:07.864819 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:08.140991 1253594 pod_ready.go:102] pod "coredns-5dd5756b68-gfwfq" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:08.238729 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:08.353405 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:08.356932 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:08.435981 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:08.739230 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:08.852720 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:08.854868 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:08.864779 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:09.071062 1253594 pod_ready.go:92] pod "coredns-5dd5756b68-gfwfq" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.071107 1253594 pod_ready.go:81] duration metric: took 3.00626175s for pod "coredns-5dd5756b68-gfwfq" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.071140 1253594 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.075992 1253594 pod_ready.go:92] pod "etcd-addons-096357" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.076018 1253594 pod_ready.go:81] duration metric: took 4.856994ms for pod "etcd-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.076034 1253594 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.138728 1253594 pod_ready.go:92] pod "kube-apiserver-addons-096357" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.138762 1253594 pod_ready.go:81] duration metric: took 62.718253ms for pod "kube-apiserver-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.138778 1253594 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.144575 1253594 pod_ready.go:92] pod "kube-controller-manager-addons-096357" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.144605 1253594 pod_ready.go:81] duration metric: took 5.81623ms for pod "kube-controller-manager-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.144620 1253594 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9q92q" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.149876 1253594 pod_ready.go:92] pod "kube-proxy-9q92q" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.149895 1253594 pod_ready.go:81] duration metric: took 5.268604ms for pod "kube-proxy-9q92q" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.149904 1253594 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.239860 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:09.352758 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:09.355223 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:09.365520 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:09.468393 1253594 pod_ready.go:92] pod "kube-scheduler-addons-096357" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:09.468425 1253594 pod_ready.go:81] duration metric: took 318.513376ms for pod "kube-scheduler-addons-096357" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.468439 1253594 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:09.739549 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:09.853069 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:09.855460 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:09.864950 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:10.190177 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:10.351928 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:10.354537 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:10.364910 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:10.690095 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:10.852473 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:10.854221 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:10.864190 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:11.189534 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:11.353055 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:11.355378 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:11.364502 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:11.474666 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:11.689638 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:11.851980 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:11.854210 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:11.864220 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:12.188753 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:12.351568 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:12.353745 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:12.364938 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:12.689642 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:12.851158 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:12.853871 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:12.864639 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:13.191768 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:13.351759 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:13.354046 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:13.363804 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:13.688923 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:13.852164 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:13.854071 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:13.863849 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:13.973182 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:14.188481 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:14.352600 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:14.355001 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:14.363854 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:14.689798 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:14.852522 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:14.854927 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:14.863890 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:15.189313 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:15.352512 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:15.354896 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:15.364589 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:15.689481 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:15.852471 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:15.854686 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:15.864621 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:16.188930 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:16.352348 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:16.354883 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:16.364051 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:16.473955 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:16.689771 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:16.851725 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:16.853852 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:16.864774 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:17.239423 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:17.353096 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:17.356531 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:17.369560 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:17.738328 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:17.853156 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:17.854405 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:17.864557 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:18.238701 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:18.353182 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:18.355271 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:18.364909 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:18.474941 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:18.689674 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:18.852740 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:18.855785 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:18.865338 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:19.189651 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:19.353364 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:19.355344 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:19.364738 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:19.739543 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:19.853465 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:19.855088 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:19.864517 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:20.189523 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:20.352183 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:20.354963 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:20.364107 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:20.690167 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:20.852546 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:20.855935 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:20.864056 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:20.974521 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:21.191466 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:21.352200 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:21.355300 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:21.364570 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:21.689385 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:21.852242 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:21.854202 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:21.863966 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:22.189722 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:22.352186 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:22.354347 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:22.364093 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:22.689029 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:22.851896 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:22.854171 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:22.864198 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:22.974652 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:23.193095 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:23.352172 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:23.354694 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:23.365009 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:23.690763 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:23.852746 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:23.854407 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:23.864199 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:24.189520 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:24.351220 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:24.353956 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:24.364327 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:24.689616 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:24.852743 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:24.855808 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:24.865122 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:24.974936 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:25.237573 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:25.353047 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:25.361744 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:25.364845 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:25.739641 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:25.854558 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:25.855077 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:25.864425 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:26.190332 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:26.353082 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:26.354744 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:26.364973 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:26.689615 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:26.852911 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:26.854914 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:26.865315 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:26.975072 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:27.191231 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:27.352488 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:27.354885 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:27.364418 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:27.689307 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:27.853080 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:27.855564 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:27.865057 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:28.237501 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:28.353156 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:28.354881 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:28.365119 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:28.689865 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:28.852847 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:28.855021 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:28.864841 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:29.189927 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:29.351963 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:29.354952 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:29.364911 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:29.473999 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:29.689870 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:29.852216 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:29.854441 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:29.864807 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:30.189713 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:30.352056 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:30.354372 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:30.364295 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:30.688765 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:30.851688 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:30.854150 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:30.863932 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:31.190934 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:31.364454 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:31.368530 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:31.369641 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:31.689947 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:31.852249 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:31.854376 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:31.864497 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:31.975113 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:32.235431 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:32.353050 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:32.355733 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:32.364810 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:32.689083 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:32.852941 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:32.854004 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:32.864095 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:33.190271 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:33.352452 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:33.355269 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:33.365648 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:33.689189 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:33.851921 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:33.855198 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:33.864904 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:34.190366 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:34.352499 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:34.354786 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:34.364962 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:34.517118 1253594 pod_ready.go:102] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"False"
	I0308 02:59:34.689437 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:34.853031 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:34.855727 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:34.865340 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:35.037183 1253594 pod_ready.go:92] pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:35.037290 1253594 pod_ready.go:81] duration metric: took 25.568839486s for pod "metrics-server-69cf46c98-tg6kt" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:35.037324 1253594 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5zvrf" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:35.044762 1253594 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-5zvrf" in "kube-system" namespace has status "Ready":"True"
	I0308 02:59:35.044783 1253594 pod_ready.go:81] duration metric: took 7.428266ms for pod "nvidia-device-plugin-daemonset-5zvrf" in "kube-system" namespace to be "Ready" ...
	I0308 02:59:35.044802 1253594 pod_ready.go:38] duration metric: took 28.988777085s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 02:59:35.044822 1253594 api_server.go:52] waiting for apiserver process to appear ...
	I0308 02:59:35.044880 1253594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 02:59:35.058905 1253594 api_server.go:72] duration metric: took 38.705497091s to wait for apiserver process to appear ...
	I0308 02:59:35.058935 1253594 api_server.go:88] waiting for apiserver healthz status ...
	I0308 02:59:35.058961 1253594 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0308 02:59:35.134776 1253594 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0308 02:59:35.136679 1253594 api_server.go:141] control plane version: v1.28.4
	I0308 02:59:35.136762 1253594 api_server.go:131] duration metric: took 77.816353ms to wait for apiserver health ...
	I0308 02:59:35.136786 1253594 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 02:59:35.151096 1253594 system_pods.go:59] 19 kube-system pods found
	I0308 02:59:35.151135 1253594 system_pods.go:61] "coredns-5dd5756b68-gfwfq" [e9e8987b-e511-4f9c-8eb9-92d73278f1a7] Running
	I0308 02:59:35.151145 1253594 system_pods.go:61] "csi-hostpath-attacher-0" [98bb39ee-61e2-47c8-9002-b9865e09e7ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0308 02:59:35.151151 1253594 system_pods.go:61] "csi-hostpath-resizer-0" [a0d8e923-1a02-49be-9da7-cf326b0e555a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0308 02:59:35.151159 1253594 system_pods.go:61] "csi-hostpathplugin-5f6b6" [e11b4aef-50fe-4d0c-ab0a-662cae2679ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0308 02:59:35.151163 1253594 system_pods.go:61] "etcd-addons-096357" [1f7777eb-f55e-49b0-9e0c-6cefc68eab78] Running
	I0308 02:59:35.151167 1253594 system_pods.go:61] "kindnet-2ssjr" [0c3f19c9-bb60-4e8f-9ad6-28624f7b09df] Running
	I0308 02:59:35.151170 1253594 system_pods.go:61] "kube-apiserver-addons-096357" [136bd4e5-d6f7-440c-8429-f5c922d56721] Running
	I0308 02:59:35.151174 1253594 system_pods.go:61] "kube-controller-manager-addons-096357" [2d4f8d7e-f23b-42be-a464-11d943e64069] Running
	I0308 02:59:35.151180 1253594 system_pods.go:61] "kube-ingress-dns-minikube" [178e20ad-93cf-4745-86ce-7befaf053f24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0308 02:59:35.151186 1253594 system_pods.go:61] "kube-proxy-9q92q" [c4dcca8b-90da-4ee6-bb23-f5e5d9e52672] Running
	I0308 02:59:35.151190 1253594 system_pods.go:61] "kube-scheduler-addons-096357" [bd71011b-c413-4c76-b15a-d8946d6fa08a] Running
	I0308 02:59:35.151196 1253594 system_pods.go:61] "metrics-server-69cf46c98-tg6kt" [df94e650-b701-42b9-9c86-8d5351621dcb] Running
	I0308 02:59:35.151201 1253594 system_pods.go:61] "nvidia-device-plugin-daemonset-5zvrf" [0c58fef2-eb9d-48b2-9e64-3481e5407cb2] Running
	I0308 02:59:35.151214 1253594 system_pods.go:61] "registry-6xbnd" [c865bced-8d68-4fe9-9b58-a387fa5d841b] Running
	I0308 02:59:35.151224 1253594 system_pods.go:61] "registry-proxy-b28lv" [53d1e743-7dde-45d5-8caa-7ac196b37d07] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0308 02:59:35.151235 1253594 system_pods.go:61] "snapshot-controller-58dbcc7b99-kkgrn" [0caaf65c-0b3f-4b9e-bbbd-0b9f36217a9e] Running
	I0308 02:59:35.151243 1253594 system_pods.go:61] "snapshot-controller-58dbcc7b99-x89gg" [af8add60-5aaf-4f9e-ad90-e3ba90083d94] Running
	I0308 02:59:35.151247 1253594 system_pods.go:61] "storage-provisioner" [34b1c6c0-cbf8-4e11-a72a-d0c4c2483cb1] Running
	I0308 02:59:35.151253 1253594 system_pods.go:61] "tiller-deploy-7b677967b9-c22n7" [f7d4183c-77c2-4528-b752-df447610d59d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0308 02:59:35.151261 1253594 system_pods.go:74] duration metric: took 14.458313ms to wait for pod list to return data ...
	I0308 02:59:35.151274 1253594 default_sa.go:34] waiting for default service account to be created ...
	I0308 02:59:35.156258 1253594 default_sa.go:45] found service account: "default"
	I0308 02:59:35.156289 1253594 default_sa.go:55] duration metric: took 5.008097ms for default service account to be created ...
	I0308 02:59:35.156300 1253594 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 02:59:35.166235 1253594 system_pods.go:86] 19 kube-system pods found
	I0308 02:59:35.166270 1253594 system_pods.go:89] "coredns-5dd5756b68-gfwfq" [e9e8987b-e511-4f9c-8eb9-92d73278f1a7] Running
	I0308 02:59:35.166283 1253594 system_pods.go:89] "csi-hostpath-attacher-0" [98bb39ee-61e2-47c8-9002-b9865e09e7ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0308 02:59:35.166294 1253594 system_pods.go:89] "csi-hostpath-resizer-0" [a0d8e923-1a02-49be-9da7-cf326b0e555a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0308 02:59:35.166311 1253594 system_pods.go:89] "csi-hostpathplugin-5f6b6" [e11b4aef-50fe-4d0c-ab0a-662cae2679ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0308 02:59:35.166321 1253594 system_pods.go:89] "etcd-addons-096357" [1f7777eb-f55e-49b0-9e0c-6cefc68eab78] Running
	I0308 02:59:35.166330 1253594 system_pods.go:89] "kindnet-2ssjr" [0c3f19c9-bb60-4e8f-9ad6-28624f7b09df] Running
	I0308 02:59:35.166342 1253594 system_pods.go:89] "kube-apiserver-addons-096357" [136bd4e5-d6f7-440c-8429-f5c922d56721] Running
	I0308 02:59:35.166351 1253594 system_pods.go:89] "kube-controller-manager-addons-096357" [2d4f8d7e-f23b-42be-a464-11d943e64069] Running
	I0308 02:59:35.166364 1253594 system_pods.go:89] "kube-ingress-dns-minikube" [178e20ad-93cf-4745-86ce-7befaf053f24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0308 02:59:35.166375 1253594 system_pods.go:89] "kube-proxy-9q92q" [c4dcca8b-90da-4ee6-bb23-f5e5d9e52672] Running
	I0308 02:59:35.166386 1253594 system_pods.go:89] "kube-scheduler-addons-096357" [bd71011b-c413-4c76-b15a-d8946d6fa08a] Running
	I0308 02:59:35.166393 1253594 system_pods.go:89] "metrics-server-69cf46c98-tg6kt" [df94e650-b701-42b9-9c86-8d5351621dcb] Running
	I0308 02:59:35.166403 1253594 system_pods.go:89] "nvidia-device-plugin-daemonset-5zvrf" [0c58fef2-eb9d-48b2-9e64-3481e5407cb2] Running
	I0308 02:59:35.166411 1253594 system_pods.go:89] "registry-6xbnd" [c865bced-8d68-4fe9-9b58-a387fa5d841b] Running
	I0308 02:59:35.166422 1253594 system_pods.go:89] "registry-proxy-b28lv" [53d1e743-7dde-45d5-8caa-7ac196b37d07] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0308 02:59:35.166432 1253594 system_pods.go:89] "snapshot-controller-58dbcc7b99-kkgrn" [0caaf65c-0b3f-4b9e-bbbd-0b9f36217a9e] Running
	I0308 02:59:35.166444 1253594 system_pods.go:89] "snapshot-controller-58dbcc7b99-x89gg" [af8add60-5aaf-4f9e-ad90-e3ba90083d94] Running
	I0308 02:59:35.166452 1253594 system_pods.go:89] "storage-provisioner" [34b1c6c0-cbf8-4e11-a72a-d0c4c2483cb1] Running
	I0308 02:59:35.166466 1253594 system_pods.go:89] "tiller-deploy-7b677967b9-c22n7" [f7d4183c-77c2-4528-b752-df447610d59d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0308 02:59:35.166480 1253594 system_pods.go:126] duration metric: took 10.171169ms to wait for k8s-apps to be running ...
	I0308 02:59:35.166494 1253594 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 02:59:35.166551 1253594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 02:59:35.179800 1253594 system_svc.go:56] duration metric: took 13.294028ms WaitForService to wait for kubelet
	I0308 02:59:35.179838 1253594 kubeadm.go:576] duration metric: took 38.826436526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 02:59:35.179864 1253594 node_conditions.go:102] verifying NodePressure condition ...
	I0308 02:59:35.237246 1253594 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0308 02:59:35.237279 1253594 node_conditions.go:123] node cpu capacity is 8
	I0308 02:59:35.237292 1253594 node_conditions.go:105] duration metric: took 57.42421ms to run NodePressure ...
	I0308 02:59:35.237305 1253594 start.go:240] waiting for startup goroutines ...
	I0308 02:59:35.239411 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:35.353008 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:35.355204 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:35.365702 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:35.689968 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:35.852842 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:35.854753 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:35.865523 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:36.189292 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:36.352668 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:36.354725 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:36.365007 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:36.689412 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:36.852665 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:36.854736 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:36.864661 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:37.190912 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:37.352992 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:37.355316 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:37.364373 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:37.689519 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:37.851955 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:37.853919 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:37.864106 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:38.189457 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:38.353074 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:38.354614 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:38.365340 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:38.689776 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:38.852551 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:38.854684 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:38.864796 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:39.189533 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:39.351806 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:39.355903 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:39.364850 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:39.690426 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:39.852920 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:39.855282 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:39.864809 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:40.239998 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:40.353687 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:40.356575 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:40.365347 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:40.690008 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:40.852521 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:40.854947 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:40.865915 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:41.190242 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:41.352376 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:41.354607 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:41.364449 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:41.690094 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:41.852145 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:41.855372 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:41.865252 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:42.189340 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:42.352698 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:42.355011 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:42.364041 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:42.689366 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:42.852382 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:42.854045 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:42.864948 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:43.189130 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:43.353077 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:43.354579 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:43.364625 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:43.690315 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:43.852050 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:43.855013 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:43.864483 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:44.240999 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:44.353206 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:44.355430 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:44.364972 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:44.690885 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:44.909043 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:44.909495 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:44.909648 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:45.188904 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:45.352732 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:45.354635 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:45.364863 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:45.690179 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:45.852906 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:45.855615 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:45.865113 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:46.189884 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:46.352893 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:46.356549 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:46.364917 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:46.690295 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:46.852466 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:46.854453 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:46.865321 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:47.190118 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:47.351849 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:47.355010 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:47.365297 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:47.689853 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:47.851931 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:47.854105 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:47.864503 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:48.189256 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:48.352746 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:48.354694 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:48.364959 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:48.688764 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:48.851996 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:48.854090 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:48.864450 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:49.195362 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:49.352772 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:49.355071 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:49.364234 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:49.689066 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:49.852198 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:49.854089 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:49.863829 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:50.189976 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:50.352343 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:50.354368 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:50.364450 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:50.689758 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:50.852728 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:50.854785 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:50.864873 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:51.190317 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:51.354243 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:51.355092 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:51.364772 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:51.690030 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:51.851908 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:51.854010 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:51.863778 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:52.189701 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:52.352355 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:52.354733 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:52.364774 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:52.689604 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:52.851905 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:52.854358 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:52.864245 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:53.237782 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:53.353818 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:53.356127 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:53.365108 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:53.739107 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:53.852304 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:53.855400 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:53.865092 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:54.189556 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:54.352079 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:54.355758 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:54.365562 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:54.689437 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:54.851539 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:54.854425 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:54.864688 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:55.189441 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:55.352803 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:55.355107 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:55.364131 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:55.688808 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:55.851879 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:55.854033 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:55.864382 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:56.189069 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:56.352308 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:56.354501 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:56.364478 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:56.689657 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:56.852852 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:56.855019 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:56.864015 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:57.190336 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:57.352587 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:57.354734 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:57.365185 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:57.690283 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:57.852666 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:57.859386 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:57.864831 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:58.189916 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:58.352619 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:58.354591 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:58.365395 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:58.690689 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:58.852917 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:58.854827 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:58.864933 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:59.190080 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:59.353352 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:59.355238 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:59.364506 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:59:59.689467 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:59:59.853787 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:59:59.854429 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:59:59.864503 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:00.189495 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:00.352847 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:00.354757 1253594 kapi.go:107] duration metric: took 56.004759939s to wait for kubernetes.io/minikube-addons=registry ...
	I0308 03:00:00.364969 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:00.690577 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:00.852351 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:00.864435 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:01.191287 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:01.352595 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:01.365374 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:01.689900 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:01.852723 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:01.865098 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:02.190492 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:02.353198 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:02.365348 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:02.689081 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:02.852144 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:02.864970 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:03.188711 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:03.352110 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:03.364703 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:03.689354 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:03.852127 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:03.864668 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:04.189808 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:04.351918 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:04.364598 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:04.689344 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:04.852317 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:04.864264 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:05.188918 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:05.351872 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:05.365135 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:05.689805 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:05.853429 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:05.866072 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:06.190211 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:06.352306 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:06.364864 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:06.738787 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:06.853454 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:06.936000 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:07.330358 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:07.486137 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:07.486419 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:07.741545 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:07.853177 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:07.937897 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:08.239533 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:08.353507 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:08.365261 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:08.739844 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:08.852363 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:08.865403 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:09.189524 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:09.352689 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:09.364787 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:09.689656 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:09.852089 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:09.865575 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:10.189407 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:10.352925 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:10.365549 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:10.689579 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:10.852709 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:10.865633 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:11.189640 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:11.352485 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:11.365548 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:11.689525 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:11.852894 1253594 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 03:00:11.865605 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:12.239871 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:12.352462 1253594 kapi.go:107] duration metric: took 1m8.006226631s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0308 03:00:12.365457 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:12.768814 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:12.865008 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:13.189846 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:13.365578 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:13.689560 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:13.864931 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:14.190272 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:14.365088 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:14.688823 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:14.865335 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:15.192585 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:15.365468 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:15.690307 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:15.864887 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 03:00:16.190432 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:16.366123 1253594 kapi.go:107] duration metric: took 1m9.504682364s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0308 03:00:16.368370 1253594 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-096357 cluster.
	I0308 03:00:16.369724 1253594 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0308 03:00:16.371016 1253594 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0308 03:00:16.689546 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:17.190002 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:17.691726 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:18.190488 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:18.689686 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:19.188882 1253594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 03:00:19.689125 1253594 kapi.go:107] duration metric: took 1m14.505736463s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0308 03:00:19.690966 1253594 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, ingress-dns, inspektor-gadget, helm-tiller, yakd, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0308 03:00:19.692142 1253594 addons.go:505] duration metric: took 1m23.338716824s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner-rancher storage-provisioner ingress-dns inspektor-gadget helm-tiller yakd metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0308 03:00:19.692179 1253594 start.go:245] waiting for cluster config update ...
	I0308 03:00:19.692199 1253594 start.go:254] writing updated cluster config ...
	I0308 03:00:19.692478 1253594 ssh_runner.go:195] Run: rm -f paused
	I0308 03:00:19.739146 1253594 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 03:00:19.741951 1253594 out.go:177] * Done! kubectl is now configured to use "addons-096357" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 08 03:00:38 addons-096357 crio[962]: time="2024-03-08 03:00:38.286543494Z" level=info msg="Stopping pod sandbox: b6be98d1fb9478121a642b2fc03fbb873a4795c1e576f6b57b2e1dc4936f2e4e" id=f39bbd1d-9341-4a47-9d8d-ddfb7bc1aacc name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 08 03:00:38 addons-096357 crio[962]: time="2024-03-08 03:00:38.286795783Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:b6be98d1fb9478121a642b2fc03fbb873a4795c1e576f6b57b2e1dc4936f2e4e UID:9c6b440d-9d03-4309-9c48-3a14a50d55cf NetNS:/var/run/netns/229fad2c-0362-40b9-8956-e42fba67cf27 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Mar 08 03:00:38 addons-096357 crio[962]: time="2024-03-08 03:00:38.286908719Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Mar 08 03:00:38 addons-096357 crio[962]: time="2024-03-08 03:00:38.324249885Z" level=info msg="Stopped pod sandbox: b6be98d1fb9478121a642b2fc03fbb873a4795c1e576f6b57b2e1dc4936f2e4e" id=f39bbd1d-9341-4a47-9d8d-ddfb7bc1aacc name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 08 03:00:38 addons-096357 crio[962]: time="2024-03-08 03:00:38.379216695Z" level=info msg="Trying to access \"docker.io/library/busybox:stable\""
	Mar 08 03:00:38 addons-096357 crio[962]: time="2024-03-08 03:00:38.973618597Z" level=info msg="Stopping container: e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d (timeout: 30s)" id=9023e620-181f-469f-b736-e4671a5c5b00 name=/runtime.v1.RuntimeService/StopContainer
	Mar 08 03:00:38 addons-096357 conmon[3989]: conmon e317781ae79d61c82b23 <ninfo>: container 4002 exited with status 2
	Mar 08 03:00:38 addons-096357 crio[962]: time="2024-03-08 03:00:38.992706114Z" level=info msg="Stopping container: 7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7 (timeout: 30s)" id=44bf00ff-6566-40ba-b0bb-04ebf87e13cb name=/runtime.v1.RuntimeService/StopContainer
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.108905951Z" level=info msg="Stopped container e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d: kube-system/registry-6xbnd/registry" id=9023e620-181f-469f-b736-e4671a5c5b00 name=/runtime.v1.RuntimeService/StopContainer
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.109483884Z" level=info msg="Stopping pod sandbox: f7ba61238d8a34b6b024c5f6d568417e89319007e64f11dff3bb47342e39c865" id=2aaa843f-2a2b-4d63-a105-14b2d35d3752 name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.109781328Z" level=info msg="Got pod network &{Name:registry-6xbnd Namespace:kube-system ID:f7ba61238d8a34b6b024c5f6d568417e89319007e64f11dff3bb47342e39c865 UID:c865bced-8d68-4fe9-9b58-a387fa5d841b NetNS:/var/run/netns/f3bb177d-3ced-40f5-a91a-dd4d25225b5a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.109919434Z" level=info msg="Deleting pod kube-system_registry-6xbnd from CNI network \"kindnet\" (type=ptp)"
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.143378278Z" level=info msg="Stopped pod sandbox: f7ba61238d8a34b6b024c5f6d568417e89319007e64f11dff3bb47342e39c865" id=2aaa843f-2a2b-4d63-a105-14b2d35d3752 name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.165918443Z" level=info msg="Stopped container 7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7: kube-system/registry-proxy-b28lv/registry-proxy" id=44bf00ff-6566-40ba-b0bb-04ebf87e13cb name=/runtime.v1.RuntimeService/StopContainer
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.166402622Z" level=info msg="Stopping pod sandbox: e68b234ac0e8ad1c8a4f612711529b0d68d5dd14444cac5863d7d40bd1a57fb6" id=ff0693c9-8596-422b-bc67-5e59c577e05a name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.169490319Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-NPV5SHYZ4VREV62C - [0:0]\n:KUBE-HP-LFJ2YQSWM32S3ROQ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-MYNAHFL75YC74ENQ - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-76dc478dd8-zsh28_ingress-nginx_da2d5741-caea-41c5-ace3-c4d20e28c595_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-LFJ2YQSWM32S3ROQ\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-76dc478dd8-zsh28_ingress-nginx_da2d5741-caea-41c5-ace3-c4d20e28c595_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-NPV5SHYZ4VREV62C\n-A KUBE-HP-LFJ2YQSWM32S3ROQ -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-76dc478dd8-zsh28_ingress-nginx_da2d5741-caea-41c5-ace3-c4d20e28c595_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-LFJ2YQSWM32S3ROQ -p tcp -m comment --comment \"k8s_ingress-nginx-controller-76dc478dd8-zsh28_ingress-nginx_da2d5741-caea-41c5-ace3-c4d20e28c595_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.20:443\n-A KUBE-HP-NPV5SHYZ4VREV62C -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-76dc478dd8-zsh28_ingress-nginx_da2d5741-caea-41c5-ace3-c4d20e28c595_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-NPV5SHYZ4VREV62C -p tcp -m comment --comment \"k8s_ingress-nginx-controller-76dc478dd8-zsh28_ingress-nginx_da2d5741-caea-41c5-ace3-c4d20e28c595_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.20:80\n-X KUBE-HP-MYNAHFL75YC74ENQ\nCOMMIT\n"
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.171982526Z" level=info msg="Closing host port tcp:5000"
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.173422207Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.173650739Z" level=info msg="Got pod network &{Name:registry-proxy-b28lv Namespace:kube-system ID:e68b234ac0e8ad1c8a4f612711529b0d68d5dd14444cac5863d7d40bd1a57fb6 UID:53d1e743-7dde-45d5-8caa-7ac196b37d07 NetNS:/var/run/netns/c0d36755-f34e-4333-9cdb-059bb2fc684a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.173833162Z" level=info msg="Deleting pod kube-system_registry-proxy-b28lv from CNI network \"kindnet\" (type=ptp)"
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.206843339Z" level=info msg="Stopped pod sandbox: e68b234ac0e8ad1c8a4f612711529b0d68d5dd14444cac5863d7d40bd1a57fb6" id=ff0693c9-8596-422b-bc67-5e59c577e05a name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.292115611Z" level=info msg="Removing container: e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d" id=b95de52f-e4f4-4d59-9d40-7a4d832e4b7d name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.307375670Z" level=info msg="Removed container e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d: kube-system/registry-6xbnd/registry" id=b95de52f-e4f4-4d59-9d40-7a4d832e4b7d name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.309212319Z" level=info msg="Removing container: 7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7" id=2638b6a8-50ff-4721-8379-8559c40b44bd name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 08 03:00:39 addons-096357 crio[962]: time="2024-03-08 03:00:39.339150542Z" level=info msg="Removed container 7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7: kube-system/registry-proxy-b28lv/registry-proxy" id=2638b6a8-50ff-4721-8379-8559c40b44bd name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	c367519589b2b       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          3 seconds ago        Exited              registry-test                            0                   b6be98d1fb947       registry-test
	3f0a679acb6a4       docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                5 seconds ago        Exited              helm-test                                0                   bc6661099bdd8       helm-test
	bea7419e0d6f7       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            9 seconds ago        Exited              helper-pod                               0                   b1cf5f6c60e28       helper-pod-create-pvc-30d0ffaf-920e-479b-bbb8-f54aaa1f5b7e
	ee87f4510f964       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:b2172c0331da19fc6b7c6076309cc78e3b7e2a3d220a6ea966a6b74b4fb471df                            12 seconds ago       Exited              gadget                                   2                   61d018671c448       gadget-pqs6g
	c632f0ca28a17       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          21 seconds ago       Running             csi-snapshotter                          0                   b348535847a6d       csi-hostpathplugin-5f6b6
	779e547f540c3       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          22 seconds ago       Running             csi-provisioner                          0                   b348535847a6d       csi-hostpathplugin-5f6b6
	375033efa75a9       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            24 seconds ago       Running             liveness-probe                           0                   b348535847a6d       csi-hostpathplugin-5f6b6
	df80420c66e1b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                                 24 seconds ago       Running             gcp-auth                                 0                   8e13906c4e71b       gcp-auth-5f6b4f85fd-dg67t
	bbf5c77dafa6d       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                                             26 seconds ago       Exited              patch                                    2                   fa31a0b4dedf6       gcp-auth-certs-patch-rnm87
	e158a0d6c92fd       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           28 seconds ago       Running             hostpath                                 0                   b348535847a6d       csi-hostpathplugin-5f6b6
	05b0f45d1f5bd       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             29 seconds ago       Running             controller                               0                   d004e293b57af       ingress-nginx-controller-76dc478dd8-zsh28
	a4e5132c4096e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                34 seconds ago       Running             node-driver-registrar                    0                   b348535847a6d       csi-hostpathplugin-5f6b6
	125d580a6fc59       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023                   38 seconds ago       Exited              create                                   0                   5e7ba42e4c1f7       gcp-auth-certs-create-8b7ld
	f4f631e73a3b3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             38 seconds ago       Running             csi-attacher                             0                   3d010be0d39db       csi-hostpath-attacher-0
	5688e98a0d1fd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              39 seconds ago       Running             csi-resizer                              0                   81437aa0d64c0       csi-hostpath-resizer-0
	ef599eb7b2558       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             48 seconds ago       Running             local-path-provisioner                   0                   1628406414a01       local-path-provisioner-78b46b4d5c-zlskr
	5b638f3101a87       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             53 seconds ago       Running             minikube-ingress-dns                     0                   4a8764ca7d287       kube-ingress-dns-minikube
	2806bf3b279c8       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15                               58 seconds ago       Running             cloud-spanner-emulator                   0                   00c5e50b6f309       cloud-spanner-emulator-6548d5df46-x848v
	3b00101f24507       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   b348535847a6d       csi-hostpathplugin-5f6b6
	79ec8f6399a18       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                                             About a minute ago   Exited              patch                                    1                   d652b751dcf38       ingress-nginx-admission-patch-fk5tc
	e53aedfb57474       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023                   About a minute ago   Exited              create                                   0                   dfeca1080201b       ingress-nginx-admission-create-sgbgl
	017a6adea13b1       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   86dec47786dca       snapshot-controller-58dbcc7b99-kkgrn
	e10fb1994ad0a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   cfc449cafeb47       snapshot-controller-58dbcc7b99-x89gg
	6cdd7dfe089d0       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              About a minute ago   Running             yakd                                     0                   0162bf33e0422       yakd-dashboard-9947fc6bf-cfg2l
	467ed4f177ffa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   110c5c6bcb9c3       storage-provisioner
	d1c9c9a01709d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   730b774f37ea7       coredns-5dd5756b68-gfwfq
	b8d8e1a75a18a       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988                                           About a minute ago   Running             kindnet-cni                              0                   abda6b79092c0       kindnet-2ssjr
	33835a3123b7d       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                                             About a minute ago   Running             kube-proxy                               0                   0d9bcdcdf3e9d       kube-proxy-9q92q
	c3cd2cdf16085       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                                             2 minutes ago        Running             kube-controller-manager                  0                   9468058e62d41       kube-controller-manager-addons-096357
	b08667073714f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   e1b41a4f4f9e3       etcd-addons-096357
	5842eb5e64906       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                                             2 minutes ago        Running             kube-apiserver                           0                   d64e3be0c28c9       kube-apiserver-addons-096357
	31d138d7bbf75       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                                             2 minutes ago        Running             kube-scheduler                           0                   dde4a55ef60df       kube-scheduler-addons-096357
	
	
	==> coredns [d1c9c9a01709da56a298714930958b00f5f3151c5f4e702a5df9e18d695b48ae] <==
	[INFO] 10.244.0.14:43864 - 35827 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099645s
	[INFO] 10.244.0.14:38013 - 41511 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00355212s
	[INFO] 10.244.0.14:38013 - 14122 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004372298s
	[INFO] 10.244.0.14:33686 - 15745 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004237256s
	[INFO] 10.244.0.14:33686 - 40068 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005051536s
	[INFO] 10.244.0.14:34309 - 24039 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004049101s
	[INFO] 10.244.0.14:34309 - 2533 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004152533s
	[INFO] 10.244.0.14:56419 - 7099 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000061711s
	[INFO] 10.244.0.14:56419 - 29625 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087735s
	[INFO] 10.244.0.21:40050 - 31143 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000234412s
	[INFO] 10.244.0.21:38298 - 16160 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000309339s
	[INFO] 10.244.0.21:54498 - 53107 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131001s
	[INFO] 10.244.0.21:60564 - 3516 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164732s
	[INFO] 10.244.0.21:41263 - 14095 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134863s
	[INFO] 10.244.0.21:34588 - 414 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000676334s
	[INFO] 10.244.0.21:46411 - 9418 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007202391s
	[INFO] 10.244.0.21:55919 - 12170 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.009295506s
	[INFO] 10.244.0.21:35702 - 52201 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005944512s
	[INFO] 10.244.0.21:35436 - 4571 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00737081s
	[INFO] 10.244.0.21:44427 - 22182 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005052488s
	[INFO] 10.244.0.21:48647 - 49063 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00580834s
	[INFO] 10.244.0.21:35358 - 25288 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000701367s
	[INFO] 10.244.0.21:40245 - 42800 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000751794s
	[INFO] 10.244.0.24:46746 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000179215s
	[INFO] 10.244.0.24:55382 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162405s
	
	
	==> describe nodes <==
	Name:               addons-096357
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-096357
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=addons-096357
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T02_58_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-096357
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-096357"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 02:58:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-096357
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:00:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:00:15 +0000   Fri, 08 Mar 2024 02:58:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:00:15 +0000   Fri, 08 Mar 2024 02:58:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:00:15 +0000   Fri, 08 Mar 2024 02:58:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:00:15 +0000   Fri, 08 Mar 2024 02:59:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-096357
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 5d87022a679f418187d5ddef2a1c9837
	  System UUID:                f68cf2d4-66b4-4770-975b-c3f6179239f2
	  Boot ID:                    a24da1d7-0c05-43c1-a2f9-39bce5338f15
	  Kernel Version:             5.15.0-1053-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-x848v      0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  gadget                      gadget-pqs6g                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  gcp-auth                    gcp-auth-5f6b4f85fd-dg67t                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-zsh28    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-gfwfq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 csi-hostpathplugin-5f6b6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 etcd-addons-096357                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-2ssjr                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-addons-096357                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-addons-096357        200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-9q92q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-addons-096357                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-58dbcc7b99-kkgrn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 snapshot-controller-58dbcc7b99-x89gg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  local-path-storage          local-path-provisioner-78b46b4d5c-zlskr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-cfg2l               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             438Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 100s  kube-proxy       
	  Normal  Starting                 117s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s  kubelet          Node addons-096357 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s  kubelet          Node addons-096357 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s  kubelet          Node addons-096357 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           105s  node-controller  Node addons-096357 event: Registered Node addons-096357 in Controller
	  Normal  NodeReady                95s   kubelet          Node addons-096357 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a f0 39 7d dd 34 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 21 ed 97 55 ec 08 06
	[ +10.401792] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 9e 93 34 cc 51 08 06
	[  +0.101014] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 3c 0d 5f c2 28 08 06
	[ +14.542690] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d0 74 61 d8 ec 08 06
	[  +0.000327] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 3c 0d 5f c2 28 08 06
	[Mar 8 02:54] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 3d fc bc 20 95 08 06
	[  +0.388951] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 3d fc bc 20 95 08 06
	[  +0.072677] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 9e 0e c5 9a 10 08 06
	[ +13.018318] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 cb 46 a2 36 1f 08 06
	[  +0.000336] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9e 0e c5 9a 10 08 06
	[  +3.877840] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 90 f7 9c ef 79 08 06
	[  +0.000304] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 72 7e 58 7d cc 71 08 06
	
	
	==> etcd [b08667073714f119924d5d083d68f7695bbc35278e39c94db60eac785faac00a] <==
	{"level":"info","ts":"2024-03-08T02:59:02.152859Z","caller":"traceutil/trace.go:171","msg":"trace[435026314] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"104.122857ms","start":"2024-03-08T02:59:02.048718Z","end":"2024-03-08T02:59:02.152841Z","steps":["trace[435026314] 'process raft request'  (duration: 95.743543ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.153189Z","caller":"traceutil/trace.go:171","msg":"trace[356450203] linearizableReadLoop","detail":"{readStateIndex:453; appliedIndex:450; }","duration":"100.595736ms","start":"2024-03-08T02:59:02.052576Z","end":"2024-03-08T02:59:02.153172Z","steps":["trace[356450203] 'read index received'  (duration: 91.847269ms)","trace[356450203] 'applied index is now lower than readState.Index'  (duration: 8.746971ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-08T02:59:02.153267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.41356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/local-path-provisioner-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T02:59:02.154018Z","caller":"traceutil/trace.go:171","msg":"trace[1435681777] range","detail":"{range_begin:/registry/clusterroles/local-path-provisioner-role; range_end:; response_count:0; response_revision:446; }","duration":"105.173252ms","start":"2024-03-08T02:59:02.04883Z","end":"2024-03-08T02:59:02.154003Z","steps":["trace[1435681777] 'agreement among raft nodes before linearized reading'  (duration: 104.388721ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.153295Z","caller":"traceutil/trace.go:171","msg":"trace[1719164322] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"104.527434ms","start":"2024-03-08T02:59:02.048761Z","end":"2024-03-08T02:59:02.153288Z","steps":["trace[1719164322] 'process raft request'  (duration: 103.294673ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.153405Z","caller":"traceutil/trace.go:171","msg":"trace[1413678497] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"104.626113ms","start":"2024-03-08T02:59:02.04877Z","end":"2024-03-08T02:59:02.153396Z","steps":["trace[1413678497] 'process raft request'  (duration: 103.333876ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:59:02.952296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.38559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/local-path-storage/local-path-config\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T02:59:02.955077Z","caller":"traceutil/trace.go:171","msg":"trace[294861755] range","detail":"{range_begin:/registry/configmaps/local-path-storage/local-path-config; range_end:; response_count:0; response_revision:519; }","duration":"105.178803ms","start":"2024-03-08T02:59:02.84988Z","end":"2024-03-08T02:59:02.955059Z","steps":["trace[294861755] 'agreement among raft nodes before linearized reading'  (duration: 98.058861ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:59:02.952515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.814934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5307"}
	{"level":"warn","ts":"2024-03-08T02:59:02.952447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.75464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/ingress-nginx\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-03-08T02:59:02.95283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.13843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/local-path-storage/local-path-provisioner\" ","response":"range_response_count:1 size:3551"}
	{"level":"warn","ts":"2024-03-08T02:59:02.952781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.840944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T02:59:02.95557Z","caller":"traceutil/trace.go:171","msg":"trace[601603887] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io; range_end:; response_count:0; response_revision:519; }","duration":"105.625927ms","start":"2024-03-08T02:59:02.849929Z","end":"2024-03-08T02:59:02.955555Z","steps":["trace[601603887] 'agreement among raft nodes before linearized reading'  (duration: 98.002855ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.955801Z","caller":"traceutil/trace.go:171","msg":"trace[173580253] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:519; }","duration":"106.09433ms","start":"2024-03-08T02:59:02.849695Z","end":"2024-03-08T02:59:02.955789Z","steps":["trace[173580253] 'agreement among raft nodes before linearized reading'  (duration: 98.171513ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.955975Z","caller":"traceutil/trace.go:171","msg":"trace[329350123] range","detail":"{range_begin:/registry/namespaces/ingress-nginx; range_end:; response_count:0; response_revision:519; }","duration":"106.290247ms","start":"2024-03-08T02:59:02.849673Z","end":"2024-03-08T02:59:02.955964Z","steps":["trace[329350123] 'agreement among raft nodes before linearized reading'  (duration: 98.776639ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:02.956137Z","caller":"traceutil/trace.go:171","msg":"trace[167553663] range","detail":"{range_begin:/registry/deployments/local-path-storage/local-path-provisioner; range_end:; response_count:1; response_revision:519; }","duration":"106.445429ms","start":"2024-03-08T02:59:02.84968Z","end":"2024-03-08T02:59:02.956125Z","steps":["trace[167553663] 'agreement among raft nodes before linearized reading'  (duration: 98.227907ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:45.14087Z","caller":"traceutil/trace.go:171","msg":"trace[409699703] transaction","detail":"{read_only:false; response_revision:1023; number_of_response:1; }","duration":"218.251772ms","start":"2024-03-08T02:59:44.922576Z","end":"2024-03-08T02:59:45.140828Z","steps":["trace[409699703] 'process raft request'  (duration: 131.611285ms)","trace[409699703] 'compare'  (duration: 86.48519ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T03:00:07.250531Z","caller":"traceutil/trace.go:171","msg":"trace[939714007] transaction","detail":"{read_only:false; response_revision:1121; number_of_response:1; }","duration":"107.265604ms","start":"2024-03-08T03:00:07.143243Z","end":"2024-03-08T03:00:07.250508Z","steps":["trace[939714007] 'process raft request'  (duration: 107.13888ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:00:07.326252Z","caller":"traceutil/trace.go:171","msg":"trace[913018281] transaction","detail":"{read_only:false; response_revision:1122; number_of_response:1; }","duration":"175.103806ms","start":"2024-03-08T03:00:07.151118Z","end":"2024-03-08T03:00:07.326222Z","steps":["trace[913018281] 'process raft request'  (duration: 174.909796ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:00:07.483925Z","caller":"traceutil/trace.go:171","msg":"trace[834842879] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1156; }","duration":"133.539853ms","start":"2024-03-08T03:00:07.350368Z","end":"2024-03-08T03:00:07.483908Z","steps":["trace[834842879] 'read index received'  (duration: 105.009507ms)","trace[834842879] 'applied index is now lower than readState.Index'  (duration: 28.52927ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-08T03:00:07.4841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.735268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14575"}
	{"level":"warn","ts":"2024-03-08T03:00:07.484098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.424788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11262"}
	{"level":"info","ts":"2024-03-08T03:00:07.484143Z","caller":"traceutil/trace.go:171","msg":"trace[2138155592] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1122; }","duration":"133.797907ms","start":"2024-03-08T03:00:07.350333Z","end":"2024-03-08T03:00:07.484131Z","steps":["trace[2138155592] 'agreement among raft nodes before linearized reading'  (duration: 133.668944ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:00:07.484149Z","caller":"traceutil/trace.go:171","msg":"trace[1059642713] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1122; }","duration":"119.490028ms","start":"2024-03-08T03:00:07.364649Z","end":"2024-03-08T03:00:07.484139Z","steps":["trace[1059642713] 'agreement among raft nodes before linearized reading'  (duration: 119.3684ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:00:12.764756Z","caller":"traceutil/trace.go:171","msg":"trace[1123586567] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"121.86086ms","start":"2024-03-08T03:00:12.642834Z","end":"2024-03-08T03:00:12.764695Z","steps":["trace[1123586567] 'process raft request'  (duration: 53.669189ms)","trace[1123586567] 'compare'  (duration: 68.03382ms)"],"step_count":2}
	
	
	==> gcp-auth [df80420c66e1bbef5aaa5cc0f8e23f73fc7b4d1008af650e2aadc37c8de7809d] <==
	2024/03/08 03:00:15 GCP Auth Webhook started!
	2024/03/08 03:00:26 Ready to marshal response ...
	2024/03/08 03:00:26 Ready to write response ...
	2024/03/08 03:00:26 Ready to marshal response ...
	2024/03/08 03:00:26 Ready to write response ...
	2024/03/08 03:00:30 Ready to marshal response ...
	2024/03/08 03:00:30 Ready to write response ...
	2024/03/08 03:00:30 Ready to marshal response ...
	2024/03/08 03:00:30 Ready to write response ...
	
	
	==> kernel <==
	 03:00:40 up  5:43,  0 users,  load average: 1.41, 1.64, 2.01
	Linux addons-096357 5.15.0-1053-gcp #61~20.04.1-Ubuntu SMP Mon Feb 26 16:50:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [b8d8e1a75a18a953d4b83284c434b026b3dcfb7f58857a28d4f41e0d9c85aac6] <==
	I0308 02:59:04.643332       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0308 02:59:04.643548       1 main.go:116] setting mtu 1500 for CNI 
	I0308 02:59:04.733682       1 main.go:146] kindnetd IP family: "ipv4"
	I0308 02:59:04.733763       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0308 02:59:05.040875       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 02:59:05.040917       1 main.go:227] handling current node
	I0308 02:59:15.148675       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 02:59:15.148700       1 main.go:227] handling current node
	I0308 02:59:25.161149       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 02:59:25.161175       1 main.go:227] handling current node
	I0308 02:59:35.165386       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 02:59:35.165419       1 main.go:227] handling current node
	I0308 02:59:45.177699       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 02:59:45.177728       1 main.go:227] handling current node
	I0308 02:59:55.181005       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 02:59:55.181031       1 main.go:227] handling current node
	I0308 03:00:05.192375       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:00:05.192400       1 main.go:227] handling current node
	I0308 03:00:15.198260       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:00:15.198292       1 main.go:227] handling current node
	I0308 03:00:25.209985       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:00:25.210015       1 main.go:227] handling current node
	I0308 03:00:35.213862       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0308 03:00:35.213890       1 main.go:227] handling current node
	
	
	==> kube-apiserver [5842eb5e64906c22f322f9c7479008d71e3fec9ded85f5f20ebeb9580664fe31] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 02:59:04.452074       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 02:59:04.453119       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 02:59:04.477911       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0308 02:59:04.478372       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 02:59:04.870388       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 02:59:05.070138       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.109.15.145"}
	I0308 02:59:05.076314       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0308 02:59:05.154858       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.107.27.219"}
	W0308 02:59:05.446850       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 02:59:06.560672       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.110.24.104"}
	W0308 02:59:34.972445       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 02:59:34.972498       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 02:59:34.972787       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0308 02:59:34.972891       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.231.33:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.231.33:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.231.33:443: connect: connection refused
	E0308 02:59:34.974424       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.231.33:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.231.33:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.231.33:443: connect: connection refused
	E0308 02:59:35.035260       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.231.33:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.231.33:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.231.33:443: connect: connection refused
	I0308 02:59:35.155876       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0308 02:59:40.535497       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0308 03:00:34.845423       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.23:44394: read: connection reset by peer
	I0308 03:00:35.981832       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0308 03:00:38.335948       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc00e066930), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0xc009ea2960), ResponseWriter:(*httpsnoop.rw)(0xc009ea2960), Flusher:(*httpsnoop.rw)(0xc009ea2960), CloseNotifier:(*httpsnoop.rw)(0xc009ea2960), Pusher:(*httpsnoop.rw)(0xc009ea2960)}}, encoder:(*versioning.codec)(0xc00f06b5e0), memAllocator:(*runtime.Allocator)(0xc004c26b10)})
	
	
	==> kube-controller-manager [c3cd2cdf16085464c0d0701ed3a9d5fd60c59c4595dadcf1a6f056ca4f60f030] <==
	I0308 03:00:12.175710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="128.827µs"
	I0308 03:00:15.189640       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0308 03:00:16.193055       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0308 03:00:16.211123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="6.789185ms"
	I0308 03:00:16.211334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="89.231µs"
	I0308 03:00:16.248579       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0308 03:00:17.199647       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0308 03:00:17.203892       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0308 03:00:17.210220       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0308 03:00:17.214239       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0308 03:00:17.214421       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0308 03:00:19.894442       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0308 03:00:19.894875       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0308 03:00:25.299042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="12.95µs"
	I0308 03:00:25.650366       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0308 03:00:26.329361       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0308 03:00:26.462213       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0308 03:00:26.462631       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0308 03:00:27.971715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="13.012613ms"
	I0308 03:00:27.971833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="77.103µs"
	I0308 03:00:35.013653       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0308 03:00:35.031527       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0308 03:00:36.758564       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="9.926µs"
	I0308 03:00:38.964329       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="14.999µs"
	I0308 03:00:40.651505       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [33835a3123b7d8932a967695c9208851459006d8f1b2c40a47158a5ebb524058] <==
	I0308 02:58:57.936522       1 server_others.go:69] "Using iptables proxy"
	I0308 02:58:58.450099       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0308 02:58:59.647674       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0308 02:58:59.836808       1 server_others.go:152] "Using iptables Proxier"
	I0308 02:58:59.836953       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0308 02:58:59.837000       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0308 02:58:59.837052       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 02:58:59.837743       1 server.go:846] "Version info" version="v1.28.4"
	I0308 02:58:59.837838       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 02:58:59.840602       1 config.go:188] "Starting service config controller"
	I0308 02:58:59.840684       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 02:58:59.840748       1 config.go:97] "Starting endpoint slice config controller"
	I0308 02:58:59.840776       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 02:58:59.841458       1 config.go:315] "Starting node config controller"
	I0308 02:58:59.841518       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 02:58:59.944667       1 shared_informer.go:318] Caches are synced for node config
	I0308 02:58:59.945856       1 shared_informer.go:318] Caches are synced for service config
	I0308 02:58:59.945958       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [31d138d7bbf75db50d64c6819489eea9d6c30785439df075e4d229475fbc20c2] <==
	E0308 02:58:40.748026       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 02:58:40.743115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 02:58:40.748069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 02:58:40.748076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 02:58:40.748099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 02:58:40.748079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 02:58:40.748086       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 02:58:40.748185       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 02:58:40.748270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 02:58:40.748214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 02:58:40.748160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 02:58:40.748331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 02:58:40.748337       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 02:58:40.748342       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 02:58:40.748668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 02:58:40.748682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 02:58:41.568688       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 02:58:41.568717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 02:58:41.698364       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 02:58:41.698406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 02:58:41.731876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 02:58:41.731912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 02:58:41.745518       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 02:58:41.745540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0308 02:58:42.240296       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 03:00:37 addons-096357 kubelet[1665]: I0308 03:00:37.354594    1665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="98f59e67-5cf8-46e5-8bc7-c916aeb9ed3d" path="/var/lib/kubelet/pods/98f59e67-5cf8-46e5-8bc7-c916aeb9ed3d/volumes"
	Mar 08 03:00:37 addons-096357 kubelet[1665]: I0308 03:00:37.354918    1665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f7d4183c-77c2-4528-b752-df447610d59d" path="/var/lib/kubelet/pods/f7d4183c-77c2-4528-b752-df447610d59d/volumes"
	Mar 08 03:00:38 addons-096357 kubelet[1665]: I0308 03:00:38.394188    1665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw5p2\" (UniqueName: \"kubernetes.io/projected/9c6b440d-9d03-4309-9c48-3a14a50d55cf-kube-api-access-rw5p2\") pod \"9c6b440d-9d03-4309-9c48-3a14a50d55cf\" (UID: \"9c6b440d-9d03-4309-9c48-3a14a50d55cf\") "
	Mar 08 03:00:38 addons-096357 kubelet[1665]: I0308 03:00:38.394260    1665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9c6b440d-9d03-4309-9c48-3a14a50d55cf-gcp-creds\") pod \"9c6b440d-9d03-4309-9c48-3a14a50d55cf\" (UID: \"9c6b440d-9d03-4309-9c48-3a14a50d55cf\") "
	Mar 08 03:00:38 addons-096357 kubelet[1665]: I0308 03:00:38.394371    1665 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b440d-9d03-4309-9c48-3a14a50d55cf-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9c6b440d-9d03-4309-9c48-3a14a50d55cf" (UID: "9c6b440d-9d03-4309-9c48-3a14a50d55cf"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 08 03:00:38 addons-096357 kubelet[1665]: I0308 03:00:38.396133    1665 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6b440d-9d03-4309-9c48-3a14a50d55cf-kube-api-access-rw5p2" (OuterVolumeSpecName: "kube-api-access-rw5p2") pod "9c6b440d-9d03-4309-9c48-3a14a50d55cf" (UID: "9c6b440d-9d03-4309-9c48-3a14a50d55cf"). InnerVolumeSpecName "kube-api-access-rw5p2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:00:38 addons-096357 kubelet[1665]: I0308 03:00:38.495652    1665 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9c6b440d-9d03-4309-9c48-3a14a50d55cf-gcp-creds\") on node \"addons-096357\" DevicePath \"\""
	Mar 08 03:00:38 addons-096357 kubelet[1665]: I0308 03:00:38.495697    1665 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rw5p2\" (UniqueName: \"kubernetes.io/projected/9c6b440d-9d03-4309-9c48-3a14a50d55cf-kube-api-access-rw5p2\") on node \"addons-096357\" DevicePath \"\""
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.201725    1665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjf6f\" (UniqueName: \"kubernetes.io/projected/c865bced-8d68-4fe9-9b58-a387fa5d841b-kube-api-access-vjf6f\") pod \"c865bced-8d68-4fe9-9b58-a387fa5d841b\" (UID: \"c865bced-8d68-4fe9-9b58-a387fa5d841b\") "
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.203470    1665 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c865bced-8d68-4fe9-9b58-a387fa5d841b-kube-api-access-vjf6f" (OuterVolumeSpecName: "kube-api-access-vjf6f") pod "c865bced-8d68-4fe9-9b58-a387fa5d841b" (UID: "c865bced-8d68-4fe9-9b58-a387fa5d841b"). InnerVolumeSpecName "kube-api-access-vjf6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.290967    1665 scope.go:117] "RemoveContainer" containerID="e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.292370    1665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6be98d1fb9478121a642b2fc03fbb873a4795c1e576f6b57b2e1dc4936f2e4e"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.302288    1665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9sh8\" (UniqueName: \"kubernetes.io/projected/53d1e743-7dde-45d5-8caa-7ac196b37d07-kube-api-access-c9sh8\") pod \"53d1e743-7dde-45d5-8caa-7ac196b37d07\" (UID: \"53d1e743-7dde-45d5-8caa-7ac196b37d07\") "
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.302487    1665 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vjf6f\" (UniqueName: \"kubernetes.io/projected/c865bced-8d68-4fe9-9b58-a387fa5d841b-kube-api-access-vjf6f\") on node \"addons-096357\" DevicePath \"\""
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.304967    1665 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d1e743-7dde-45d5-8caa-7ac196b37d07-kube-api-access-c9sh8" (OuterVolumeSpecName: "kube-api-access-c9sh8") pod "53d1e743-7dde-45d5-8caa-7ac196b37d07" (UID: "53d1e743-7dde-45d5-8caa-7ac196b37d07"). InnerVolumeSpecName "kube-api-access-c9sh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.307628    1665 scope.go:117] "RemoveContainer" containerID="e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: E0308 03:00:39.308058    1665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d\": container with ID starting with e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d not found: ID does not exist" containerID="e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.308108    1665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d"} err="failed to get container status \"e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d\": rpc error: code = NotFound desc = could not find container \"e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d\": container with ID starting with e317781ae79d61c82b236294ca899df2f3d8d10957ef1754de8d19fae2dcce9d not found: ID does not exist"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.308125    1665 scope.go:117] "RemoveContainer" containerID="7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.339434    1665 scope.go:117] "RemoveContainer" containerID="7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: E0308 03:00:39.339854    1665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7\": container with ID starting with 7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7 not found: ID does not exist" containerID="7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.339903    1665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7"} err="failed to get container status \"7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7\": rpc error: code = NotFound desc = could not find container \"7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7\": container with ID starting with 7b647460e3469d2de1cb2a17c4ec206c89d7e9938f4d9040dfef812f5160e0d7 not found: ID does not exist"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.354402    1665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9c6b440d-9d03-4309-9c48-3a14a50d55cf" path="/var/lib/kubelet/pods/9c6b440d-9d03-4309-9c48-3a14a50d55cf/volumes"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.355021    1665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c865bced-8d68-4fe9-9b58-a387fa5d841b" path="/var/lib/kubelet/pods/c865bced-8d68-4fe9-9b58-a387fa5d841b/volumes"
	Mar 08 03:00:39 addons-096357 kubelet[1665]: I0308 03:00:39.403515    1665 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c9sh8\" (UniqueName: \"kubernetes.io/projected/53d1e743-7dde-45d5-8caa-7ac196b37d07-kube-api-access-c9sh8\") on node \"addons-096357\" DevicePath \"\""
	
	
	==> storage-provisioner [467ed4f177ffab0a3446f269e31d4c97c1c8454b50e839fedfa72072239fc771] <==
	I0308 02:59:08.236763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 02:59:08.247490       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 02:59:08.247644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 02:59:08.257412       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 02:59:08.257600       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-096357_31364743-c6e2-4087-b1ed-9f4cecf34757!
	I0308 02:59:08.258252       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d333fa28-702d-4e42-b931-a1eef4b74a5a", APIVersion:"v1", ResourceVersion:"872", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-096357_31364743-c6e2-4087-b1ed-9f4cecf34757 became leader
	I0308 02:59:08.358062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-096357_31364743-c6e2-4087-b1ed-9f4cecf34757!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-096357 -n addons-096357
helpers_test.go:261: (dbg) Run:  kubectl --context addons-096357 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path gcp-auth-certs-patch-rnm87 ingress-nginx-admission-create-sgbgl ingress-nginx-admission-patch-fk5tc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-096357 describe pod test-local-path gcp-auth-certs-patch-rnm87 ingress-nginx-admission-create-sgbgl ingress-nginx-admission-patch-fk5tc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-096357 describe pod test-local-path gcp-auth-certs-patch-rnm87 ingress-nginx-admission-create-sgbgl ingress-nginx-admission-patch-fk5tc: exit status 1 (67.609348ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-096357/192.168.49.2
	Start Time:       Fri, 08 Mar 2024 03:00:34 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  cri-o://1b1708315e8c58cd09ddc63ed45d61f09caabc0dcde3ec87133f848a555b3096
	    Image:         busybox:stable
	    Image ID:      docker.io/library/busybox@sha256:4be429a5fbb2e71ae7958bfa558bc637cf3a61baf40a708cb8fff532b39e52d0
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 08 Mar 2024 03:00:40 +0000
	      Finished:     Fri, 08 Mar 2024 03:00:40 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qj797 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-qj797:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/test-local-path to addons-096357
	  Normal  Pulling    7s    kubelet            Pulling image "busybox:stable"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "busybox:stable" in 3.534s (5.768s including waiting)
	  Normal  Created    1s    kubelet            Created container busybox
	  Normal  Started    1s    kubelet            Started container busybox

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-rnm87" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-sgbgl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fk5tc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-096357 describe pod test-local-path gcp-auth-certs-patch-rnm87 ingress-nginx-admission-create-sgbgl ingress-nginx-admission-patch-fk5tc: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (2.60s)

Test pass (306/335)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 14.8
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 17.22
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.2
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 15.29
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.58
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.2
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 1.25
30 TestBinaryMirror 0.73
31 TestOffline 63.95
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 131.96
38 TestAddons/parallel/Registry 19.3
40 TestAddons/parallel/InspektorGadget 10.76
41 TestAddons/parallel/MetricsServer 5.65
42 TestAddons/parallel/HelmTiller 11.4
44 TestAddons/parallel/CSI 87.23
46 TestAddons/parallel/CloudSpanner 5.48
47 TestAddons/parallel/LocalPath 60.16
48 TestAddons/parallel/NvidiaDevicePlugin 6.46
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 12.1
54 TestCertOptions 34.56
55 TestCertExpiration 223.67
57 TestForceSystemdFlag 31.25
58 TestForceSystemdEnv 37.59
60 TestKVMDriverInstallOrUpdate 4.44
64 TestErrorSpam/setup 21.6
65 TestErrorSpam/start 0.6
66 TestErrorSpam/status 0.85
67 TestErrorSpam/pause 1.5
68 TestErrorSpam/unpause 1.46
69 TestErrorSpam/stop 1.38
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 40.8
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 25.78
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.06
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
81 TestFunctional/serial/CacheCmd/cache/add_local 2.02
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 40.13
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.41
92 TestFunctional/serial/LogsFileCmd 1.39
93 TestFunctional/serial/InvalidService 4.28
95 TestFunctional/parallel/ConfigCmd 0.54
96 TestFunctional/parallel/DashboardCmd 13.31
97 TestFunctional/parallel/DryRun 0.45
98 TestFunctional/parallel/InternationalLanguage 0.21
99 TestFunctional/parallel/StatusCmd 1.42
103 TestFunctional/parallel/ServiceCmdConnect 13.7
104 TestFunctional/parallel/AddonsCmd 0.21
105 TestFunctional/parallel/PersistentVolumeClaim 40
107 TestFunctional/parallel/SSHCmd 0.63
108 TestFunctional/parallel/CpCmd 1.74
109 TestFunctional/parallel/MySQL 25.46
110 TestFunctional/parallel/FileSync 0.3
111 TestFunctional/parallel/CertSync 1.76
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
119 TestFunctional/parallel/License 0.61
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.21
125 TestFunctional/parallel/ServiceCmd/DeployApp 13.17
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
133 TestFunctional/parallel/ProfileCmd/profile_list 0.37
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
135 TestFunctional/parallel/MountCmd/any-port 9.26
136 TestFunctional/parallel/ServiceCmd/List 0.75
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
139 TestFunctional/parallel/ServiceCmd/Format 0.4
140 TestFunctional/parallel/ServiceCmd/URL 0.46
141 TestFunctional/parallel/Version/short 0.06
142 TestFunctional/parallel/Version/components 0.51
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
147 TestFunctional/parallel/ImageCommands/ImageBuild 3.06
148 TestFunctional/parallel/ImageCommands/Setup 2.07
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.52
150 TestFunctional/parallel/MountCmd/specific-port 1.65
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.55
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.18
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2
158 TestFunctional/parallel/ImageCommands/ImageRemove 2.51
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.81
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
161 TestFunctional/delete_addon-resizer_images 0.06
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestMutliControlPlane/serial/StartCluster 116.78
168 TestMutliControlPlane/serial/DeployApp 5.78
169 TestMutliControlPlane/serial/PingHostFromPods 1.14
170 TestMutliControlPlane/serial/AddWorkerNode 27.47
171 TestMutliControlPlane/serial/NodeLabels 0.07
172 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.62
173 TestMutliControlPlane/serial/CopyFile 15.65
174 TestMutliControlPlane/serial/StopSecondaryNode 12.42
175 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
176 TestMutliControlPlane/serial/RestartSecondaryNode 20.64
177 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.46
178 TestMutliControlPlane/serial/RestartClusterKeepsNodes 213.4
179 TestMutliControlPlane/serial/DeleteSecondaryNode 12.49
180 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.46
181 TestMutliControlPlane/serial/StopCluster 35.53
182 TestMutliControlPlane/serial/RestartCluster 107.94
183 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.43
184 TestMutliControlPlane/serial/AddSecondaryNode 41.19
185 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.6
189 TestJSONOutput/start/Command 43.71
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.66
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.57
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.76
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 38.3
215 TestKicCustomNetwork/use_default_bridge_network 26.89
216 TestKicExistingNetwork 26.21
217 TestKicCustomSubnet 26.32
218 TestKicStaticIP 27.23
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 49.63
223 TestMountStart/serial/StartWithMountFirst 8.8
224 TestMountStart/serial/VerifyMountFirst 0.25
225 TestMountStart/serial/StartWithMountSecond 5.72
226 TestMountStart/serial/VerifyMountSecond 0.25
227 TestMountStart/serial/DeleteFirst 1.6
228 TestMountStart/serial/VerifyMountPostDelete 0.24
229 TestMountStart/serial/Stop 1.18
230 TestMountStart/serial/RestartStopped 7.63
231 TestMountStart/serial/VerifyMountPostStop 0.25
234 TestMultiNode/serial/FreshStart2Nodes 75.56
235 TestMultiNode/serial/DeployApp2Nodes 5.06
236 TestMultiNode/serial/PingHostFrom2Pods 0.78
237 TestMultiNode/serial/AddNode 25.36
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.29
240 TestMultiNode/serial/CopyFile 9.1
241 TestMultiNode/serial/StopNode 2.1
242 TestMultiNode/serial/StartAfterStop 8.69
243 TestMultiNode/serial/RestartKeepsNodes 106.25
244 TestMultiNode/serial/DeleteNode 5.36
245 TestMultiNode/serial/StopMultiNode 23.71
246 TestMultiNode/serial/RestartMultiNode 49.48
247 TestMultiNode/serial/ValidateNameConflict 27.14
252 TestPreload 107.32
254 TestScheduledStopUnix 100.28
257 TestInsufficientStorage 13.4
258 TestRunningBinaryUpgrade 61.06
260 TestKubernetesUpgrade 359.01
261 TestMissingContainerUpgrade 99.78
263 TestStoppedBinaryUpgrade/Setup 2.56
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/StartWithK8s 33.07
266 TestStoppedBinaryUpgrade/Upgrade 129.45
267 TestNoKubernetes/serial/StartWithStopK8s 12.18
268 TestNoKubernetes/serial/Start 5.85
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
270 TestNoKubernetes/serial/ProfileList 1.15
271 TestNoKubernetes/serial/Stop 1.21
272 TestNoKubernetes/serial/StartNoArgs 7.34
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
281 TestNetworkPlugins/group/false 4.01
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
287 TestPause/serial/Start 51.79
288 TestPause/serial/SecondStartNoReconfiguration 19.84
289 TestPause/serial/Pause 0.79
290 TestPause/serial/VerifyStatus 0.29
291 TestPause/serial/Unpause 0.63
292 TestPause/serial/PauseAgain 0.84
293 TestPause/serial/DeletePaused 2.75
294 TestPause/serial/VerifyDeletedResources 0.66
302 TestNetworkPlugins/group/auto/Start 56.95
303 TestNetworkPlugins/group/kindnet/Start 41.41
304 TestNetworkPlugins/group/auto/KubeletFlags 0.28
305 TestNetworkPlugins/group/auto/NetCatPod 9.18
306 TestNetworkPlugins/group/auto/DNS 0.17
307 TestNetworkPlugins/group/auto/Localhost 0.12
308 TestNetworkPlugins/group/auto/HairPin 0.19
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/Start 68.48
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
312 TestNetworkPlugins/group/kindnet/NetCatPod 10.82
313 TestNetworkPlugins/group/custom-flannel/Start 62.58
314 TestNetworkPlugins/group/kindnet/DNS 0.17
315 TestNetworkPlugins/group/kindnet/Localhost 0.13
316 TestNetworkPlugins/group/kindnet/HairPin 0.19
317 TestNetworkPlugins/group/enable-default-cni/Start 41.66
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.34
320 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
321 TestNetworkPlugins/group/calico/NetCatPod 11.21
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.24
325 TestNetworkPlugins/group/custom-flannel/DNS 0.13
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
328 TestNetworkPlugins/group/calico/DNS 0.12
329 TestNetworkPlugins/group/calico/Localhost 0.11
330 TestNetworkPlugins/group/calico/HairPin 0.13
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
334 TestNetworkPlugins/group/flannel/Start 65.28
335 TestNetworkPlugins/group/bridge/Start 43.2
337 TestStartStop/group/old-k8s-version/serial/FirstStart 136.23
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
339 TestNetworkPlugins/group/bridge/NetCatPod 9.19
340 TestNetworkPlugins/group/bridge/DNS 0.14
341 TestNetworkPlugins/group/bridge/Localhost 0.11
342 TestNetworkPlugins/group/bridge/HairPin 0.11
343 TestNetworkPlugins/group/flannel/ControllerPod 6.01
345 TestStartStop/group/no-preload/serial/FirstStart 65.05
346 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
347 TestNetworkPlugins/group/flannel/NetCatPod 10.17
349 TestStartStop/group/embed-certs/serial/FirstStart 53.26
350 TestNetworkPlugins/group/flannel/DNS 0.17
351 TestNetworkPlugins/group/flannel/Localhost 0.14
352 TestNetworkPlugins/group/flannel/HairPin 0.13
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.47
355 TestStartStop/group/embed-certs/serial/DeployApp 10.26
356 TestStartStop/group/no-preload/serial/DeployApp 11.23
357 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
358 TestStartStop/group/embed-certs/serial/Stop 11.9
359 TestStartStop/group/old-k8s-version/serial/DeployApp 11.39
360 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.81
361 TestStartStop/group/no-preload/serial/Stop 11.83
362 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
363 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
365 TestStartStop/group/embed-certs/serial/SecondStart 275.96
366 TestStartStop/group/old-k8s-version/serial/Stop 11.9
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
368 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.09
369 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
370 TestStartStop/group/no-preload/serial/SecondStart 262.63
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
372 TestStartStop/group/old-k8s-version/serial/SecondStart 136.09
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
374 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.13
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/old-k8s-version/serial/Pause 2.58
380 TestStartStop/group/newest-cni/serial/FirstStart 37.65
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
383 TestStartStop/group/newest-cni/serial/Stop 1.19
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
385 TestStartStop/group/newest-cni/serial/SecondStart 12.5
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
389 TestStartStop/group/newest-cni/serial/Pause 2.8
390 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
392 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
393 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
394 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
395 TestStartStop/group/no-preload/serial/Pause 2.65
396 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
398 TestStartStop/group/embed-certs/serial/Pause 2.62
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.5
TestDownloadOnly/v1.20.0/json-events (14.8s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-728790 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-728790 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.802144352s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.80s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-728790
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-728790: exit status 85 (75.575146ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-728790 | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC |          |
	|         | -p download-only-728790        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 02:57:16
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 02:57:16.336272 1252097 out.go:291] Setting OutFile to fd 1 ...
	I0308 02:57:16.336565 1252097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:57:16.336575 1252097 out.go:304] Setting ErrFile to fd 2...
	I0308 02:57:16.336579 1252097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:57:16.336780 1252097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	W0308 02:57:16.336902 1252097 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18333-1245188/.minikube/config/config.json: open /home/jenkins/minikube-integration/18333-1245188/.minikube/config/config.json: no such file or directory
	I0308 02:57:16.337478 1252097 out.go:298] Setting JSON to true
	I0308 02:57:16.338501 1252097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":20383,"bootTime":1709846254,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 02:57:16.338581 1252097 start.go:139] virtualization: kvm guest
	I0308 02:57:16.341063 1252097 out.go:97] [download-only-728790] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 02:57:16.342465 1252097 out.go:169] MINIKUBE_LOCATION=18333
	I0308 02:57:16.341160 1252097 notify.go:220] Checking for updates...
	W0308 02:57:16.341192 1252097 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball: no such file or directory
	I0308 02:57:16.345035 1252097 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 02:57:16.346317 1252097 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 02:57:16.347636 1252097 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	I0308 02:57:16.348838 1252097 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0308 02:57:16.351108 1252097 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0308 02:57:16.351298 1252097 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 02:57:16.372618 1252097 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0308 02:57:16.372783 1252097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:57:16.423153 1252097 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2024-03-08 02:57:16.414271123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:57:16.423260 1252097 docker.go:295] overlay module found
	I0308 02:57:16.425087 1252097 out.go:97] Using the docker driver based on user configuration
	I0308 02:57:16.425109 1252097 start.go:297] selected driver: docker
	I0308 02:57:16.425114 1252097 start.go:901] validating driver "docker" against <nil>
	I0308 02:57:16.425196 1252097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:57:16.471955 1252097 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2024-03-08 02:57:16.463340511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:57:16.472161 1252097 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 02:57:16.473025 1252097 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0308 02:57:16.473244 1252097 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0308 02:57:16.475222 1252097 out.go:169] Using Docker driver with root privileges
	I0308 02:57:16.476547 1252097 cni.go:84] Creating CNI manager for ""
	I0308 02:57:16.476568 1252097 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0308 02:57:16.476577 1252097 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0308 02:57:16.476661 1252097 start.go:340] cluster config:
	{Name:download-only-728790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-728790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:57:16.478090 1252097 out.go:97] Starting "download-only-728790" primary control-plane node in "download-only-728790" cluster
	I0308 02:57:16.478110 1252097 cache.go:121] Beginning downloading kic base image for docker with crio
	I0308 02:57:16.479254 1252097 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0308 02:57:16.479288 1252097 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 02:57:16.479366 1252097 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0308 02:57:16.494094 1252097 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0308 02:57:16.494265 1252097 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0308 02:57:16.494362 1252097 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0308 02:57:16.586010 1252097 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 02:57:16.586042 1252097 cache.go:56] Caching tarball of preloaded images
	I0308 02:57:16.586167 1252097 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 02:57:16.588161 1252097 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0308 02:57:16.588188 1252097 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0308 02:57:16.702494 1252097 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-728790 host does not exist
	  To start a cluster, run: "minikube start -p download-only-728790"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-728790
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
TestDownloadOnly/v1.28.4/json-events (17.22s)
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-338197 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-338197 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (17.220077691s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (17.22s)
TestDownloadOnly/v1.28.4/preload-exists (0.00s)
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-338197
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-338197: exit status 85 (73.512898ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-728790 | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC |                     |
	|         | -p download-only-728790        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| delete  | -p download-only-728790        | download-only-728790 | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| start   | -o=json --download-only        | download-only-338197 | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC |                     |
	|         | -p download-only-338197        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 02:57:31
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 02:57:31.561098 1252405 out.go:291] Setting OutFile to fd 1 ...
	I0308 02:57:31.561208 1252405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:57:31.561216 1252405 out.go:304] Setting ErrFile to fd 2...
	I0308 02:57:31.561221 1252405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:57:31.561442 1252405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 02:57:31.562037 1252405 out.go:298] Setting JSON to true
	I0308 02:57:31.563032 1252405 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":20398,"bootTime":1709846254,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 02:57:31.563098 1252405 start.go:139] virtualization: kvm guest
	I0308 02:57:31.565465 1252405 out.go:97] [download-only-338197] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 02:57:31.566843 1252405 out.go:169] MINIKUBE_LOCATION=18333
	I0308 02:57:31.565659 1252405 notify.go:220] Checking for updates...
	I0308 02:57:31.569283 1252405 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 02:57:31.570553 1252405 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 02:57:31.571749 1252405 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	I0308 02:57:31.572910 1252405 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0308 02:57:31.575198 1252405 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0308 02:57:31.575438 1252405 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 02:57:31.596865 1252405 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0308 02:57:31.596969 1252405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:57:31.644775 1252405 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-08 02:57:31.635943604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:57:31.644892 1252405 docker.go:295] overlay module found
	I0308 02:57:31.646601 1252405 out.go:97] Using the docker driver based on user configuration
	I0308 02:57:31.646638 1252405 start.go:297] selected driver: docker
	I0308 02:57:31.646651 1252405 start.go:901] validating driver "docker" against <nil>
	I0308 02:57:31.646737 1252405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:57:31.692425 1252405 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-08 02:57:31.684020667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:57:31.692662 1252405 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 02:57:31.693148 1252405 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0308 02:57:31.693323 1252405 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0308 02:57:31.695212 1252405 out.go:169] Using Docker driver with root privileges
	I0308 02:57:31.696630 1252405 cni.go:84] Creating CNI manager for ""
	I0308 02:57:31.696648 1252405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0308 02:57:31.696658 1252405 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0308 02:57:31.696724 1252405 start.go:340] cluster config:
	{Name:download-only-338197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-338197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:57:31.698100 1252405 out.go:97] Starting "download-only-338197" primary control-plane node in "download-only-338197" cluster
	I0308 02:57:31.698119 1252405 cache.go:121] Beginning downloading kic base image for docker with crio
	I0308 02:57:31.699296 1252405 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0308 02:57:31.699326 1252405 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:57:31.699445 1252405 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0308 02:57:31.715112 1252405 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0308 02:57:31.715241 1252405 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0308 02:57:31.715262 1252405 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0308 02:57:31.715277 1252405 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0308 02:57:31.715292 1252405 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0308 02:57:32.130788 1252405 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 02:57:32.130834 1252405 cache.go:56] Caching tarball of preloaded images
	I0308 02:57:32.131018 1252405 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:57:32.132973 1252405 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0308 02:57:32.132994 1252405 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0308 02:57:32.676897 1252405 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 02:57:46.171574 1252405 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0308 02:57:46.171689 1252405 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-338197 host does not exist
	  To start a cluster, run: "minikube start -p download-only-338197"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

TestDownloadOnly/v1.28.4/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.20s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-338197
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.29.0-rc.2/json-events (15.29s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-564762 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-564762 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.28698644s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (15.29s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.58s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-564762
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-564762: exit status 85 (578.297375ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-728790 | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC |                     |
	|         | -p download-only-728790           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| delete  | -p download-only-728790           | download-only-728790 | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| start   | -o=json --download-only           | download-only-338197 | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC |                     |
	|         | -p download-only-338197           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| delete  | -p download-only-338197           | download-only-338197 | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC | 08 Mar 24 02:57 UTC |
	| start   | -o=json --download-only           | download-only-564762 | jenkins | v1.32.0 | 08 Mar 24 02:57 UTC |                     |
	|         | -p download-only-564762           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 02:57:49
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 02:57:49.184356 1252723 out.go:291] Setting OutFile to fd 1 ...
	I0308 02:57:49.184485 1252723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:57:49.184497 1252723 out.go:304] Setting ErrFile to fd 2...
	I0308 02:57:49.184501 1252723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:57:49.184696 1252723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 02:57:49.185244 1252723 out.go:298] Setting JSON to true
	I0308 02:57:49.186209 1252723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":20415,"bootTime":1709846254,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 02:57:49.186274 1252723 start.go:139] virtualization: kvm guest
	I0308 02:57:49.188466 1252723 out.go:97] [download-only-564762] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 02:57:49.189884 1252723 out.go:169] MINIKUBE_LOCATION=18333
	I0308 02:57:49.188612 1252723 notify.go:220] Checking for updates...
	I0308 02:57:49.192269 1252723 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 02:57:49.193399 1252723 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 02:57:49.194653 1252723 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	I0308 02:57:49.195929 1252723 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0308 02:57:49.198121 1252723 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0308 02:57:49.198357 1252723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 02:57:49.217991 1252723 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0308 02:57:49.218088 1252723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:57:49.265935 1252723 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-08 02:57:49.256884646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:57:49.266054 1252723 docker.go:295] overlay module found
	I0308 02:57:49.267717 1252723 out.go:97] Using the docker driver based on user configuration
	I0308 02:57:49.267742 1252723 start.go:297] selected driver: docker
	I0308 02:57:49.267748 1252723 start.go:901] validating driver "docker" against <nil>
	I0308 02:57:49.267840 1252723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 02:57:49.313223 1252723 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-08 02:57:49.304267777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 02:57:49.313874 1252723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 02:57:49.314952 1252723 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0308 02:57:49.315155 1252723 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0308 02:57:49.316895 1252723 out.go:169] Using Docker driver with root privileges
	I0308 02:57:49.318587 1252723 cni.go:84] Creating CNI manager for ""
	I0308 02:57:49.318604 1252723 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0308 02:57:49.318614 1252723 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0308 02:57:49.318685 1252723 start.go:340] cluster config:
	{Name:download-only-564762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-564762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:57:49.320059 1252723 out.go:97] Starting "download-only-564762" primary control-plane node in "download-only-564762" cluster
	I0308 02:57:49.320074 1252723 cache.go:121] Beginning downloading kic base image for docker with crio
	I0308 02:57:49.321232 1252723 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0308 02:57:49.321261 1252723 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 02:57:49.321360 1252723 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0308 02:57:49.336155 1252723 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0308 02:57:49.336273 1252723 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0308 02:57:49.336293 1252723 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0308 02:57:49.336300 1252723 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0308 02:57:49.336314 1252723 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0308 02:57:49.752484 1252723 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0308 02:57:49.752523 1252723 cache.go:56] Caching tarball of preloaded images
	I0308 02:57:49.752699 1252723 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 02:57:49.754556 1252723 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0308 02:57:49.754578 1252723 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0308 02:57:49.867273 1252723 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0308 02:58:00.698234 1252723 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0308 02:58:00.698333 1252723 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0308 02:58:01.457579 1252723 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0308 02:58:01.457937 1252723 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/download-only-564762/config.json ...
	I0308 02:58:01.457969 1252723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/download-only-564762/config.json: {Name:mk0da5a0a55947720dec78c4bb14fcbcd02138db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:58:01.458200 1252723 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 02:58:01.458380 1252723 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18333-1245188/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-564762 host does not exist
	  To start a cluster, run: "minikube start -p download-only-564762"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.58s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.20s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-564762
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.25s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-688312 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-688312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-688312
--- PASS: TestDownloadOnlyKic (1.25s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-887601 --alsologtostderr --binary-mirror http://127.0.0.1:34547 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-887601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-887601
--- PASS: TestBinaryMirror (0.73s)

TestOffline (63.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-226553 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-226553 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m1.341732928s)
helpers_test.go:175: Cleaning up "offline-crio-226553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-226553
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-226553: (2.605278188s)
--- PASS: TestOffline (63.95s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-096357
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-096357: exit status 85 (64.662037ms)

-- stdout --
	* Profile "addons-096357" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-096357"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-096357
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-096357: exit status 85 (63.960668ms)

-- stdout --
	* Profile "addons-096357" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-096357"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (131.96s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-096357 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-096357 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m11.959700619s)
--- PASS: TestAddons/Setup (131.96s)

TestAddons/parallel/Registry (19.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 13.582625ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6xbnd" [c865bced-8d68-4fe9-9b58-a387fa5d841b] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004880125s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b28lv" [53d1e743-7dde-45d5-8caa-7ac196b37d07] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004260931s
addons_test.go:340: (dbg) Run:  kubectl --context addons-096357 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-096357 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-096357 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.506935891s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 ip
2024/03/08 03:00:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.30s)

TestAddons/parallel/InspektorGadget (10.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pqs6g" [5ec4e6a2-d9a0-47ad-9815-c79c17c6e999] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004226275s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-096357
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-096357: (5.753052254s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.65s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 11.548938ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-tg6kt" [df94e650-b701-42b9-9c86-8d5351621dcb] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004675491s
addons_test.go:415: (dbg) Run:  kubectl --context addons-096357 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.4s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.53419ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-c22n7" [f7d4183c-77c2-4528-b752-df447610d59d] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004903235s
addons_test.go:473: (dbg) Run:  kubectl --context addons-096357 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-096357 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.909133085s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.40s)

                                                
                                    
TestAddons/parallel/CSI (87.23s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 13.488323ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-096357 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-096357 get pvc hpvc -o jsonpath={.status.phase} -n default
[the poll above repeats 36 more times while waiting for pvc "hpvc" to become Bound]
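The repeated `get pvc` poll above is the test helper re-reading the claim's `.status.phase` until it reports `Bound` (or the 6m0s wait expires). A minimal sketch of that loop, with a stubbed phase command so it runs without a cluster (`wait_for_phase` and `report_bound` are hypothetical names, not helpers from this suite):

```shell
# Poll a phase-reporting command until it prints the wanted value,
# mirroring the repeated
#   kubectl get pvc hpvc -o jsonpath={.status.phase} -n default
# calls in the log. Hypothetical sketch, not the suite's actual helper.
wait_for_phase() {
  want=$1
  cmd=$2
  tries=0
  while [ "$tries" -lt 10 ]; do
    if [ "$($cmd)" = "$want" ]; then
      echo "phase reached: $want"
      return 0
    fi
    tries=$((tries + 1))  # the real helper also sleeps between polls
  done
  echo "timed out waiting for $want" >&2
  return 1
}

# Stub standing in for kubectl so the sketch is runnable anywhere.
report_bound() { echo Bound; }

wait_for_phase Bound report_bound
```

In the log, the polled command is the `kubectl --context addons-096357 get pvc hpvc -o jsonpath={.status.phase} -n default` invocation, and the loop is bounded by the test's 6m0s timeout rather than a fixed try count.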
addons_test.go:574: (dbg) Run:  kubectl --context addons-096357 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5a71109f-6d9b-4395-84aa-fa2bf1d2ed54] Pending
helpers_test.go:344: "task-pv-pod" [5a71109f-6d9b-4395-84aa-fa2bf1d2ed54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5a71109f-6d9b-4395-84aa-fa2bf1d2ed54] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003434806s
addons_test.go:584: (dbg) Run:  kubectl --context addons-096357 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-096357 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-096357 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-096357 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-096357 delete pod task-pv-pod: (1.250367855s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-096357 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-096357 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-096357 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
[the poll above repeats 17 more times while waiting for pvc "hpvc-restore" to become Bound]
addons_test.go:616: (dbg) Run:  kubectl --context addons-096357 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5c4bece7-abe3-46c5-ad68-a35bd988ca0a] Pending
helpers_test.go:344: "task-pv-pod-restore" [5c4bece7-abe3-46c5-ad68-a35bd988ca0a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5c4bece7-abe3-46c5-ad68-a35bd988ca0a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003207846s
addons_test.go:626: (dbg) Run:  kubectl --context addons-096357 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-096357 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-096357 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-096357 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.52032914s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (87.23s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.48s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-x848v" [a9522a84-77a5-4921-a0b8-a8c0bd39094b] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003622213s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-096357
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

                                                
                                    
TestAddons/parallel/LocalPath (60.16s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-096357 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-096357 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-096357 get pvc test-pvc -o jsonpath={.status.phase} -n default
[the poll above repeats 8 more times while waiting for pvc "test-pvc" to become Bound]
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [56d6e196-34a9-4fc5-a5af-cfe3d3c2ca0f] Pending
helpers_test.go:344: "test-local-path" [56d6e196-34a9-4fc5-a5af-cfe3d3c2ca0f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [56d6e196-34a9-4fc5-a5af-cfe3d3c2ca0f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [56d6e196-34a9-4fc5-a5af-cfe3d3c2ca0f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.002988468s
addons_test.go:891: (dbg) Run:  kubectl --context addons-096357 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 ssh "cat /opt/local-path-provisioner/pvc-30d0ffaf-920e-479b-bbb8-f54aaa1f5b7e_default_test-pvc/file1"
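The `/opt/local-path-provisioner/...` path read by the `ssh "cat ..."` step above follows the local-path provisioner's on-host layout, `<root>/<pv-name>_<namespace>_<pvc-name>`, where the PV name is the claim's `.spec.volumeName`. A small sketch of building that path (the `pv` value is stubbed here; on a live cluster it would come from something like `kubectl get pvc test-pvc -o jsonpath='{.spec.volumeName}'`):

```shell
# Stubbed PV name; on a live cluster this would be read with e.g.
#   kubectl get pvc test-pvc -o jsonpath='{.spec.volumeName}'
pv="pvc-30d0ffaf-920e-479b-bbb8-f54aaa1f5b7e"
ns="default"
pvc="test-pvc"

# local-path-provisioner keeps each volume's data under
# <root>/<pv-name>_<namespace>_<pvc-name>; /opt/local-path-provisioner
# is the root used in this run.
path="/opt/local-path-provisioner/${pv}_${ns}_${pvc}/file1"
echo "$path"
```

The resulting string matches the path the test passes to `minikube ssh "cat ..."` in the line above.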
addons_test.go:912: (dbg) Run:  kubectl --context addons-096357 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-096357 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-096357 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-096357 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.281047111s)
--- PASS: TestAddons/parallel/LocalPath (60.16s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.46s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5zvrf" [0c58fef2-eb9d-48b2-9e64-3481e5407cb2] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004444903s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-096357
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.46s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-cfg2l" [fb8273b5-6e47-4953-b012-9ad5340a02f0] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004238627s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-096357 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-096357 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.1s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-096357
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-096357: (11.818658781s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-096357
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-096357
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-096357
--- PASS: TestAddons/StoppedEnableDisable (12.10s)

                                                
                                    
TestCertOptions (34.56s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-547819 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-547819 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.901404642s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-547819 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-547819 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-547819 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-547819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-547819
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-547819: (1.994811409s)
--- PASS: TestCertOptions (34.56s)

                                                
                                    
TestCertExpiration (223.67s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-390778 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-390778 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.359939121s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-390778 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-390778 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.96846297s)
helpers_test.go:175: Cleaning up "cert-expiration-390778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-390778
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-390778: (2.338451065s)
--- PASS: TestCertExpiration (223.67s)

                                                
                                    
TestForceSystemdFlag (31.25s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-494695 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-494695 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.277157369s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-494695 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-494695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-494695
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-494695: (3.647025017s)
--- PASS: TestForceSystemdFlag (31.25s)

                                                
                                    
TestForceSystemdEnv (37.59s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-283091 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-283091 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.301296936s)
helpers_test.go:175: Cleaning up "force-systemd-env-283091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-283091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-283091: (4.288605884s)
--- PASS: TestForceSystemdEnv (37.59s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.44s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.44s)

                                                
                                    
TestErrorSpam/setup (21.6s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-535442 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-535442 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-535442 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-535442 --driver=docker  --container-runtime=crio: (21.595897817s)
--- PASS: TestErrorSpam/setup (21.60s)

                                                
                                    
TestErrorSpam/start (0.6s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 start --dry-run
--- PASS: TestErrorSpam/start (0.60s)

                                                
                                    
TestErrorSpam/status (0.85s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
TestErrorSpam/pause (1.5s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 pause
--- PASS: TestErrorSpam/pause (1.50s)

                                                
                                    
TestErrorSpam/unpause (1.46s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 unpause
--- PASS: TestErrorSpam/unpause (1.46s)

TestErrorSpam/stop (1.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 stop: (1.179737606s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535442 --log_dir /tmp/nospam-535442 stop
--- PASS: TestErrorSpam/stop (1.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18333-1245188/.minikube/files/etc/test/nested/copy/1252085/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892717 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-892717 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.798868826s)
--- PASS: TestFunctional/serial/StartWithProxy (40.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.78s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892717 --alsologtostderr -v=8
E0308 03:05:19.765100 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:19.770948 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:19.781175 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:19.801452 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:19.841650 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:19.921972 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:20.082380 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:20.402977 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:21.043782 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:22.324776 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:05:24.885426 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-892717 --alsologtostderr -v=8: (25.777148929s)
functional_test.go:659: soft start took 25.778003853s for "functional-892717" cluster.
--- PASS: TestFunctional/serial/SoftStart (25.78s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-892717 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 cache add registry.k8s.io/pause:3.3
E0308 03:05:30.005642 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

TestFunctional/serial/CacheCmd/cache/add_local (2.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-892717 /tmp/TestFunctionalserialCacheCmdcacheadd_local3180394061/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 cache add minikube-local-cache-test:functional-892717
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 cache add minikube-local-cache-test:functional-892717: (1.671607276s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 cache delete minikube-local-cache-test:functional-892717
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-892717
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.02s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (268.452691ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 kubectl -- --context functional-892717 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-892717 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (40.13s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892717 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0308 03:05:40.246275 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:06:00.726820 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-892717 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.134170289s)
functional_test.go:757: restart took 40.134310852s for "functional-892717" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.13s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-892717 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.41s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 logs: (1.410390873s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

TestFunctional/serial/LogsFileCmd (1.39s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 logs --file /tmp/TestFunctionalserialLogsFileCmd1812918857/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 logs --file /tmp/TestFunctionalserialLogsFileCmd1812918857/001/logs.txt: (1.385368504s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

TestFunctional/serial/InvalidService (4.28s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-892717 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-892717
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-892717: exit status 115 (331.916649ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31328 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-892717 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.28s)

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 config get cpus: exit status 14 (124.401213ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 config get cpus: exit status 14 (87.782271ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)

TestFunctional/parallel/DashboardCmd (13.31s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-892717 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-892717 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1287575: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.31s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892717 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-892717 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.161411ms)

-- stdout --
	* [functional-892717] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0308 03:06:39.449451 1286722 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:06:39.449639 1286722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:06:39.449658 1286722 out.go:304] Setting ErrFile to fd 2...
	I0308 03:06:39.449666 1286722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:06:39.449906 1286722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 03:06:39.450882 1286722 out.go:298] Setting JSON to false
	I0308 03:06:39.452432 1286722 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":20946,"bootTime":1709846254,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:06:39.452910 1286722 start.go:139] virtualization: kvm guest
	I0308 03:06:39.455190 1286722 out.go:177] * [functional-892717] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:06:39.456826 1286722 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:06:39.456786 1286722 notify.go:220] Checking for updates...
	I0308 03:06:39.458290 1286722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:06:39.459875 1286722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 03:06:39.461699 1286722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	I0308 03:06:39.463107 1286722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:06:39.464678 1286722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:06:39.466584 1286722 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:06:39.467247 1286722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:06:39.494312 1286722 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0308 03:06:39.494439 1286722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 03:06:39.576569 1286722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:59 SystemTime:2024-03-08 03:06:39.562870438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 03:06:39.576729 1286722 docker.go:295] overlay module found
	I0308 03:06:39.579695 1286722 out.go:177] * Using the docker driver based on existing profile
	I0308 03:06:39.581610 1286722 start.go:297] selected driver: docker
	I0308 03:06:39.581631 1286722 start.go:901] validating driver "docker" against &{Name:functional-892717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-892717 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:06:39.581778 1286722 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:06:39.585641 1286722 out.go:177] 
	W0308 03:06:39.587036 1286722 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0308 03:06:39.588394 1286722 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892717 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892717 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-892717 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (213.079785ms)

-- stdout --
	* [functional-892717] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0308 03:06:39.238522 1286644 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:06:39.238698 1286644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:06:39.238717 1286644 out.go:304] Setting ErrFile to fd 2...
	I0308 03:06:39.238725 1286644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:06:39.239201 1286644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 03:06:39.239940 1286644 out.go:298] Setting JSON to false
	I0308 03:06:39.241479 1286644 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":20946,"bootTime":1709846254,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:06:39.241570 1286644 start.go:139] virtualization: kvm guest
	I0308 03:06:39.245003 1286644 out.go:177] * [functional-892717] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0308 03:06:39.246733 1286644 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:06:39.246825 1286644 notify.go:220] Checking for updates...
	I0308 03:06:39.248050 1286644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:06:39.249470 1286644 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 03:06:39.253727 1286644 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	I0308 03:06:39.255436 1286644 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:06:39.256897 1286644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:06:39.258583 1286644 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:06:39.259115 1286644 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:06:39.283533 1286644 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0308 03:06:39.285362 1286644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 03:06:39.332092 1286644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:59 SystemTime:2024-03-08 03:06:39.32310291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 03:06:39.332198 1286644 docker.go:295] overlay module found
	I0308 03:06:39.334894 1286644 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0308 03:06:39.336152 1286644 start.go:297] selected driver: docker
	I0308 03:06:39.336170 1286644 start.go:901] validating driver "docker" against &{Name:functional-892717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-892717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:06:39.336273 1286644 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:06:39.382382 1286644 out.go:177] 
	W0308 03:06:39.385465 1286644 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0308 03:06:39.387093 1286644 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.42s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.42s)

TestFunctional/parallel/ServiceCmdConnect (13.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-892717 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-892717 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-77n64" [ccf03c85-0fc3-43b9-8ad9-ed089bcd5e70] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-77n64" [ccf03c85-0fc3-43b9-8ad9-ed089bcd5e70] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.019355471s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32241
functional_test.go:1671: http://192.168.49.2:32241: success! body:

Hostname: hello-node-connect-55497b8b78-77n64

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32241
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.70s)

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (40s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [eb8356e1-91cf-4510-ad5b-222dbeea13a7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00541029s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-892717 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-892717 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-892717 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-892717 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-892717 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [58a5922e-1159-4fbc-b0d1-6c82affddeb6] Pending
helpers_test.go:344: "sp-pod" [58a5922e-1159-4fbc-b0d1-6c82affddeb6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [58a5922e-1159-4fbc-b0d1-6c82affddeb6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004121159s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-892717 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-892717 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-892717 delete -f testdata/storage-provisioner/pod.yaml: (1.625576594s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-892717 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9bf582da-b680-4eac-8c5f-da7e145e13be] Pending
helpers_test.go:344: "sp-pod" [9bf582da-b680-4eac-8c5f-da7e145e13be] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9bf582da-b680-4eac-8c5f-da7e145e13be] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004344579s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-892717 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.00s)

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (1.74s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh -n functional-892717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 cp functional-892717:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2945623659/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh -n functional-892717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh -n functional-892717 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.74s)

TestFunctional/parallel/MySQL (25.46s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-892717 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
2024/03/08 03:06:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-859648c796-v5lpb" [ffdc08a2-ecd1-4c05-af45-0dd1efbd7a30] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-v5lpb" [ffdc08a2-ecd1-4c05-af45-0dd1efbd7a30] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004084917s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-892717 exec mysql-859648c796-v5lpb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-892717 exec mysql-859648c796-v5lpb -- mysql -ppassword -e "show databases;": exit status 1 (110.836957ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-892717 exec mysql-859648c796-v5lpb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.46s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1252085/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo cat /etc/test/nested/copy/1252085/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.76s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1252085.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo cat /etc/ssl/certs/1252085.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1252085.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo cat /usr/share/ca-certificates/1252085.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/12520852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo cat /etc/ssl/certs/12520852.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/12520852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo cat /usr/share/ca-certificates/12520852.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.76s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-892717 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo systemctl is-active docker"
E0308 03:06:41.687372 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 ssh "sudo systemctl is-active docker": exit status 1 (334.826953ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 ssh "sudo systemctl is-active containerd": exit status 1 (255.460834ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

TestFunctional/parallel/License (0.61s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-892717 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-892717 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-892717 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1284174: os: process already finished
helpers_test.go:508: unable to kill pid 1283845: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-892717 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-892717 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-892717 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3b88087a-2dd8-43c4-bc5a-680adc9c31b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3b88087a-2dd8-43c4-bc5a-680adc9c31b5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.00330215s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-892717 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-892717 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-dcsgk" [6b363e16-71e7-4266-be7c-b80ef842d915] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-dcsgk" [6b363e16-71e7-4266-be7c-b80ef842d915] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.028830509s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-892717 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.25.49 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-892717 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "287.734675ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "80.720395ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "378.502381ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "73.400798ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/any-port (9.26s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdany-port3362500665/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709867197357020326" to /tmp/TestFunctionalparallelMountCmdany-port3362500665/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709867197357020326" to /tmp/TestFunctionalparallelMountCmdany-port3362500665/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709867197357020326" to /tmp/TestFunctionalparallelMountCmdany-port3362500665/001/test-1709867197357020326
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (436.985067ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  8 03:06 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  8 03:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  8 03:06 test-1709867197357020326
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh cat /mount-9p/test-1709867197357020326
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-892717 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7c6e4ac2-f249-45f2-8efe-a2c42f7fa4df] Pending
helpers_test.go:344: "busybox-mount" [7c6e4ac2-f249-45f2-8efe-a2c42f7fa4df] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7c6e4ac2-f249-45f2-8efe-a2c42f7fa4df] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7c6e4ac2-f249-45f2-8efe-a2c42f7fa4df] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004965596s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-892717 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdany-port3362500665/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.26s)

TestFunctional/parallel/ServiceCmd/List (0.75s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.75s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 service list -o json
functional_test.go:1490: Took "567.868463ms" to run "out/minikube-linux-amd64 -p functional-892717 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31198
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31198
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892717 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-892717
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892717 image ls --format short --alsologtostderr:
I0308 03:07:10.555845 1291642 out.go:291] Setting OutFile to fd 1 ...
I0308 03:07:10.556004 1291642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:10.556019 1291642 out.go:304] Setting ErrFile to fd 2...
I0308 03:07:10.556027 1291642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:10.556614 1291642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
I0308 03:07:10.558105 1291642 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:10.558430 1291642 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:10.559007 1291642 cli_runner.go:164] Run: docker container inspect functional-892717 --format={{.State.Status}}
I0308 03:07:10.576670 1291642 ssh_runner.go:195] Run: systemctl --version
I0308 03:07:10.576720 1291642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892717
I0308 03:07:10.600083 1291642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/functional-892717/id_rsa Username:docker}
I0308 03:07:10.682309 1291642 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892717 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/library/nginx                 | alpine             | 6913ed9ec8d00 | 44.4MB |
| gcr.io/google-containers/addon-resizer  | functional-892717  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | latest             | e4720093a3c13 | 191MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892717 image ls --format table --alsologtostderr:
I0308 03:07:10.787790 1291767 out.go:291] Setting OutFile to fd 1 ...
I0308 03:07:10.787947 1291767 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:10.787962 1291767 out.go:304] Setting ErrFile to fd 2...
I0308 03:07:10.787969 1291767 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:10.788173 1291767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
I0308 03:07:10.788721 1291767 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:10.788816 1291767 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:10.789196 1291767 cli_runner.go:164] Run: docker container inspect functional-892717 --format={{.State.Status}}
I0308 03:07:10.806730 1291767 ssh_runner.go:195] Run: systemctl --version
I0308 03:07:10.806815 1291767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892717
I0308 03:07:10.822901 1291767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/functional-892717/id_rsa Username:docker}
I0308 03:07:10.914053 1291767 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892717 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":["docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9","docker.io/library/nginx@sha256:cb0953165f59b5cf2227ae979a49a2284956d997fad4ed7a338eebc6aef3e70b"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44394342"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":["docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71","docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865895"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-892717"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892717 image ls --format json --alsologtostderr:
I0308 03:07:10.556105 1291643 out.go:291] Setting OutFile to fd 1 ...
I0308 03:07:10.556379 1291643 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:10.556391 1291643 out.go:304] Setting ErrFile to fd 2...
I0308 03:07:10.556397 1291643 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:10.556653 1291643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
I0308 03:07:10.557433 1291643 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:10.557606 1291643 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:10.558161 1291643 cli_runner.go:164] Run: docker container inspect functional-892717 --format={{.State.Status}}
I0308 03:07:10.577460 1291643 ssh_runner.go:195] Run: systemctl --version
I0308 03:07:10.577520 1291643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892717
I0308 03:07:10.601703 1291643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/functional-892717/id_rsa Username:docker}
I0308 03:07:10.682677 1291643 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892717 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests:
- docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
repoTags:
- docker.io/library/nginx:latest
size: "190865895"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests:
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
- docker.io/library/nginx@sha256:cb0953165f59b5cf2227ae979a49a2284956d997fad4ed7a338eebc6aef3e70b
repoTags:
- docker.io/library/nginx:alpine
size: "44394342"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-892717
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892717 image ls --format yaml --alsologtostderr:
I0308 03:07:10.558360 1291644 out.go:291] Setting OutFile to fd 1 ...
I0308 03:07:10.558676 1291644 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:10.558688 1291644 out.go:304] Setting ErrFile to fd 2...
I0308 03:07:10.558694 1291644 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:10.558975 1291644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
I0308 03:07:10.559555 1291644 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:10.559654 1291644 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:10.560065 1291644 cli_runner.go:164] Run: docker container inspect functional-892717 --format={{.State.Status}}
I0308 03:07:10.578024 1291644 ssh_runner.go:195] Run: systemctl --version
I0308 03:07:10.578065 1291644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892717
I0308 03:07:10.600551 1291644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/functional-892717/id_rsa Username:docker}
I0308 03:07:10.682602 1291644 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
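For downstream tooling, `image ls` also accepts `--format json`, which emits the same fields as the YAML listing above. A minimal sketch of consuming one record programmatically (the record below is copied from the YAML output, re-expressed as JSON; field names match the listing):

```python
import json

# One record from the `image ls` listing above, re-expressed as JSON
# (illustrative only; `minikube image ls --format json` emits the same fields).
listing = json.loads("""
[
  {
    "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
    "repoDigests": [
      "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
    ],
    "repoTags": ["gcr.io/k8s-minikube/storage-provisioner:v5"],
    "size": "31470524"
  }
]
""")

for image in listing:
    # Note: size is reported as a string of bytes, not an integer.
    print(image["repoTags"][0], int(image["size"]))
```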

TestFunctional/parallel/ImageCommands/ImageBuild (3.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 ssh pgrep buildkitd: exit status 1 (241.322109ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image build -t localhost/my-image:functional-892717 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 image build -t localhost/my-image:functional-892717 testdata/build --alsologtostderr: (2.595492882s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892717 image build -t localhost/my-image:functional-892717 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 14adf4178f4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-892717
--> c49ac6acf4f
Successfully tagged localhost/my-image:functional-892717
c49ac6acf4faca3abe733fa0abc92968b0f4117377e0500768bf72434f1cf79b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892717 image build -t localhost/my-image:functional-892717 testdata/build --alsologtostderr:
I0308 03:07:11.029032 1291889 out.go:291] Setting OutFile to fd 1 ...
I0308 03:07:11.029169 1291889 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:11.029180 1291889 out.go:304] Setting ErrFile to fd 2...
I0308 03:07:11.029185 1291889 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:07:11.029416 1291889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
I0308 03:07:11.030114 1291889 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:11.030710 1291889 config.go:182] Loaded profile config "functional-892717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:07:11.031153 1291889 cli_runner.go:164] Run: docker container inspect functional-892717 --format={{.State.Status}}
I0308 03:07:11.048512 1291889 ssh_runner.go:195] Run: systemctl --version
I0308 03:07:11.048579 1291889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892717
I0308 03:07:11.065628 1291889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/functional-892717/id_rsa Username:docker}
I0308 03:07:11.145908 1291889 build_images.go:151] Building image from path: /tmp/build.3634182046.tar
I0308 03:07:11.146004 1291889 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0308 03:07:11.154270 1291889 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3634182046.tar
I0308 03:07:11.157465 1291889 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3634182046.tar: stat -c "%s %y" /var/lib/minikube/build/build.3634182046.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3634182046.tar': No such file or directory
I0308 03:07:11.157500 1291889 ssh_runner.go:362] scp /tmp/build.3634182046.tar --> /var/lib/minikube/build/build.3634182046.tar (3072 bytes)
I0308 03:07:11.179163 1291889 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3634182046
I0308 03:07:11.187513 1291889 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3634182046 -xf /var/lib/minikube/build/build.3634182046.tar
I0308 03:07:11.195610 1291889 crio.go:297] Building image: /var/lib/minikube/build/build.3634182046
I0308 03:07:11.195675 1291889 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-892717 /var/lib/minikube/build/build.3634182046 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0308 03:07:13.544283 1291889 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-892717 /var/lib/minikube/build/build.3634182046 --cgroup-manager=cgroupfs: (2.348568706s)
I0308 03:07:13.544364 1291889 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3634182046
I0308 03:07:13.553198 1291889 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3634182046.tar
I0308 03:07:13.561275 1291889 build_images.go:207] Built localhost/my-image:functional-892717 from /tmp/build.3634182046.tar
I0308 03:07:13.561303 1291889 build_images.go:123] succeeded building to: functional-892717
I0308 03:07:13.561307 1291889 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.06s)
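The three STEP lines in the build output above correspond to a Containerfile of roughly this shape (a sketch reconstructed from the log; the actual contents of `testdata/build` are not shown in this report):

```dockerfile
# Reconstructed from the STEP 1/3..3/3 lines above (illustrative only).
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```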

TestFunctional/parallel/ImageCommands/Setup (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.013641431s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-892717
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.07s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image load --daemon gcr.io/google-containers/addon-resizer:functional-892717 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 image load --daemon gcr.io/google-containers/addon-resizer:functional-892717 --alsologtostderr: (5.733848007s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 image ls: (1.780973504s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.52s)

TestFunctional/parallel/MountCmd/specific-port (1.65s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdspecific-port626192576/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.173848ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdspecific-port626192576/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 ssh "sudo umount -f /mount-9p": exit status 1 (295.383909ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-892717 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdspecific-port626192576/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.65s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2351853424/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2351853424/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2351853424/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T" /mount1: exit status 1 (588.595652ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-892717 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2351853424/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2351853424/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892717 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2351853424/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image load --daemon gcr.io/google-containers/addon-resizer:functional-892717 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 image load --daemon gcr.io/google-containers/addon-resizer:functional-892717 --alsologtostderr: (3.331948502s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.55s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.85913823s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-892717
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image load --daemon gcr.io/google-containers/addon-resizer:functional-892717 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 image load --daemon gcr.io/google-containers/addon-resizer:functional-892717 --alsologtostderr: (3.884115744s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image save gcr.io/google-containers/addon-resizer:functional-892717 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 image save gcr.io/google-containers/addon-resizer:functional-892717 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.999230455s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.00s)

TestFunctional/parallel/ImageCommands/ImageRemove (2.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image rm gcr.io/google-containers/addon-resizer:functional-892717 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 image rm gcr.io/google-containers/addon-resizer:functional-892717 --alsologtostderr: (2.258953905s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-892717 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.584331654s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-892717
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-892717 image save --daemon gcr.io/google-containers/addon-resizer:functional-892717 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-892717
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-892717
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-892717
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-892717
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMutliControlPlane/serial/StartCluster (116.78s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-328757 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0308 03:08:03.609324 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-328757 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m56.129581758s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (116.78s)

TestMutliControlPlane/serial/DeployApp (5.78s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-328757 -- rollout status deployment/busybox: (3.753718419s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-5n27x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-bq2xr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-d22n4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-5n27x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-bq2xr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-d22n4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-5n27x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-bq2xr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-d22n4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (5.78s)
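The DeployApp run above first collects every busybox pod IP with a jsonpath query before exercising DNS from each replica. One property the HA layout implies is that the replicas land on different nodes and so receive distinct pod IPs; a minimal offline sketch of that uniqueness check, using hypothetical sample IPs in place of real jsonpath output:

```shell
#!/bin/sh
# Sample output of `kubectl get pods -o jsonpath='{.items[*].status.podIP}'`
# (hypothetical IPs; a real run returns addresses from the cluster pod CIDR).
pod_ips='10.244.0.5 10.244.1.3 10.244.2.4'

# With one replica per node, every pod IP should be distinct.
total=$(printf '%s\n' $pod_ips | wc -l)
unique=$(printf '%s\n' $pod_ips | sort -u | wc -l)
[ "$total" -eq "$unique" ] && echo "pod IPs are distinct"
```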

                                                
                                    
TestMutliControlPlane/serial/PingHostFromPods (1.14s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-5n27x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-5n27x -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-bq2xr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-bq2xr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-d22n4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328757 -- exec busybox-5b5d89c9d6-d22n4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.14s)
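Each pod resolves host.minikube.internal and the test slices the IP out of busybox nslookup output with `awk 'NR==5' | cut -d' ' -f3` before pinging it. That extraction pipeline can be exercised offline against a canned transcript; the transcript below is a hypothetical example of busybox's output layout, not captured from this run:

```shell
#!/bin/sh
# Hypothetical busybox `nslookup host.minikube.internal` output; the exact
# layout depends on the busybox build, so treat this as an illustration.
lookup='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Same pipeline the test runs inside the pod: take line 5, then field 3.
host_ip=$(printf '%s\n' "$lookup" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The fragile part is the fixed line number: if the resolver prints an extra line, `NR==5` picks the wrong row, which is why the ping on the extracted address serves as the real assertion.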

                                                
                                    
TestMutliControlPlane/serial/AddWorkerNode (27.47s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-328757 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-328757 -v=7 --alsologtostderr: (26.670594362s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (27.47s)

                                                
                                    
TestMutliControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-328757 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterClusterStart (0.62s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.62s)

                                                
                                    
TestMutliControlPlane/serial/CopyFile (15.65s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp testdata/cp-test.txt ha-328757:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3768692047/001/cp-test_ha-328757.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757:/home/docker/cp-test.txt ha-328757-m02:/home/docker/cp-test_ha-328757_ha-328757-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m02 "sudo cat /home/docker/cp-test_ha-328757_ha-328757-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757:/home/docker/cp-test.txt ha-328757-m03:/home/docker/cp-test_ha-328757_ha-328757-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m03 "sudo cat /home/docker/cp-test_ha-328757_ha-328757-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757:/home/docker/cp-test.txt ha-328757-m04:/home/docker/cp-test_ha-328757_ha-328757-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m04 "sudo cat /home/docker/cp-test_ha-328757_ha-328757-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp testdata/cp-test.txt ha-328757-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3768692047/001/cp-test_ha-328757-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m02:/home/docker/cp-test.txt ha-328757:/home/docker/cp-test_ha-328757-m02_ha-328757.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757 "sudo cat /home/docker/cp-test_ha-328757-m02_ha-328757.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m02:/home/docker/cp-test.txt ha-328757-m03:/home/docker/cp-test_ha-328757-m02_ha-328757-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m03 "sudo cat /home/docker/cp-test_ha-328757-m02_ha-328757-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m02:/home/docker/cp-test.txt ha-328757-m04:/home/docker/cp-test_ha-328757-m02_ha-328757-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m04 "sudo cat /home/docker/cp-test_ha-328757-m02_ha-328757-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp testdata/cp-test.txt ha-328757-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3768692047/001/cp-test_ha-328757-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m03:/home/docker/cp-test.txt ha-328757:/home/docker/cp-test_ha-328757-m03_ha-328757.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757 "sudo cat /home/docker/cp-test_ha-328757-m03_ha-328757.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m03:/home/docker/cp-test.txt ha-328757-m02:/home/docker/cp-test_ha-328757-m03_ha-328757-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m02 "sudo cat /home/docker/cp-test_ha-328757-m03_ha-328757-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m03:/home/docker/cp-test.txt ha-328757-m04:/home/docker/cp-test_ha-328757-m03_ha-328757-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m04 "sudo cat /home/docker/cp-test_ha-328757-m03_ha-328757-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp testdata/cp-test.txt ha-328757-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3768692047/001/cp-test_ha-328757-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m04:/home/docker/cp-test.txt ha-328757:/home/docker/cp-test_ha-328757-m04_ha-328757.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757 "sudo cat /home/docker/cp-test_ha-328757-m04_ha-328757.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m04:/home/docker/cp-test.txt ha-328757-m02:/home/docker/cp-test_ha-328757-m04_ha-328757-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m02 "sudo cat /home/docker/cp-test_ha-328757-m04_ha-328757-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 cp ha-328757-m04:/home/docker/cp-test.txt ha-328757-m03:/home/docker/cp-test_ha-328757-m04_ha-328757-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 ssh -n ha-328757-m03 "sudo cat /home/docker/cp-test_ha-328757-m04_ha-328757-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (15.65s)
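CopyFile round-trips testdata/cp-test.txt through every node pair: `minikube cp` pushes the file, then `minikube ssh -n <node> "sudo cat ..."` reads it back for comparison. Without a cluster, the same push-then-verify shape can be sketched with plain `cp` standing in for `minikube cp` (the temp paths and file content below are hypothetical):

```shell
#!/bin/sh
# Stand-in for the cp-test round trip: `cp` plays `minikube cp`,
# `cmp` plays the `ssh -n <node> "sudo cat ..."` read-back comparison.
workdir=$(mktemp -d)
printf 'Test file for minikube cp\n' > "$workdir/cp-test.txt"

cp "$workdir/cp-test.txt" "$workdir/cp-test_ha-328757-m02.txt"  # "push" to a node

# Read it back and compare byte-for-byte, as the helper does.
if cmp -s "$workdir/cp-test.txt" "$workdir/cp-test_ha-328757-m02.txt"; then
  result=ok
else
  result=mismatch
fi
echo "$result"
rm -r "$workdir"
```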

                                                
                                    
TestMutliControlPlane/serial/StopSecondaryNode (12.42s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 node stop m02 -v=7 --alsologtostderr
E0308 03:10:19.765029 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-328757 node stop m02 -v=7 --alsologtostderr: (11.798261977s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr: exit status 7 (622.883181ms)

-- stdout --
	ha-328757
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328757-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-328757-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328757-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0308 03:10:21.498854 1311629 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:10:21.499069 1311629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:10:21.499083 1311629 out.go:304] Setting ErrFile to fd 2...
	I0308 03:10:21.499087 1311629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:10:21.499326 1311629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 03:10:21.499530 1311629 out.go:298] Setting JSON to false
	I0308 03:10:21.499563 1311629 mustload.go:65] Loading cluster: ha-328757
	I0308 03:10:21.499668 1311629 notify.go:220] Checking for updates...
	I0308 03:10:21.500859 1311629 config.go:182] Loaded profile config "ha-328757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:10:21.500898 1311629 status.go:255] checking status of ha-328757 ...
	I0308 03:10:21.501904 1311629 cli_runner.go:164] Run: docker container inspect ha-328757 --format={{.State.Status}}
	I0308 03:10:21.518638 1311629 status.go:330] ha-328757 host status = "Running" (err=<nil>)
	I0308 03:10:21.518678 1311629 host.go:66] Checking if "ha-328757" exists ...
	I0308 03:10:21.518949 1311629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-328757
	I0308 03:10:21.535310 1311629 host.go:66] Checking if "ha-328757" exists ...
	I0308 03:10:21.535554 1311629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:10:21.535605 1311629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-328757
	I0308 03:10:21.552024 1311629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/ha-328757/id_rsa Username:docker}
	I0308 03:10:21.635395 1311629 ssh_runner.go:195] Run: systemctl --version
	I0308 03:10:21.639556 1311629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:10:21.650328 1311629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 03:10:21.699216 1311629 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:78 SystemTime:2024-03-08 03:10:21.690193181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 03:10:21.699851 1311629 kubeconfig.go:125] found "ha-328757" server: "https://192.168.49.254:8443"
	I0308 03:10:21.699883 1311629 api_server.go:166] Checking apiserver status ...
	I0308 03:10:21.699916 1311629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:10:21.710389 1311629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1570/cgroup
	I0308 03:10:21.718678 1311629 api_server.go:182] apiserver freezer: "10:freezer:/docker/325ee1bcb2dfb94cec1e8fc07ac36d14e91f68d9f1cf82a16a56b26be83a2331/crio/crio-90a041d08b3980b8c844d0be452f33af18d08c0377aa03755fd7a2660cad5b29"
	I0308 03:10:21.718730 1311629 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/325ee1bcb2dfb94cec1e8fc07ac36d14e91f68d9f1cf82a16a56b26be83a2331/crio/crio-90a041d08b3980b8c844d0be452f33af18d08c0377aa03755fd7a2660cad5b29/freezer.state
	I0308 03:10:21.726378 1311629 api_server.go:204] freezer state: "THAWED"
	I0308 03:10:21.726406 1311629 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0308 03:10:21.730178 1311629 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0308 03:10:21.730199 1311629 status.go:422] ha-328757 apiserver status = Running (err=<nil>)
	I0308 03:10:21.730210 1311629 status.go:257] ha-328757 status: &{Name:ha-328757 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:10:21.730228 1311629 status.go:255] checking status of ha-328757-m02 ...
	I0308 03:10:21.730468 1311629 cli_runner.go:164] Run: docker container inspect ha-328757-m02 --format={{.State.Status}}
	I0308 03:10:21.746906 1311629 status.go:330] ha-328757-m02 host status = "Stopped" (err=<nil>)
	I0308 03:10:21.746928 1311629 status.go:343] host is not running, skipping remaining checks
	I0308 03:10:21.746936 1311629 status.go:257] ha-328757-m02 status: &{Name:ha-328757-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:10:21.746963 1311629 status.go:255] checking status of ha-328757-m03 ...
	I0308 03:10:21.747218 1311629 cli_runner.go:164] Run: docker container inspect ha-328757-m03 --format={{.State.Status}}
	I0308 03:10:21.764218 1311629 status.go:330] ha-328757-m03 host status = "Running" (err=<nil>)
	I0308 03:10:21.764243 1311629 host.go:66] Checking if "ha-328757-m03" exists ...
	I0308 03:10:21.764497 1311629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-328757-m03
	I0308 03:10:21.779908 1311629 host.go:66] Checking if "ha-328757-m03" exists ...
	I0308 03:10:21.780210 1311629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:10:21.780257 1311629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-328757-m03
	I0308 03:10:21.796957 1311629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/ha-328757-m03/id_rsa Username:docker}
	I0308 03:10:21.878843 1311629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:10:21.889520 1311629 kubeconfig.go:125] found "ha-328757" server: "https://192.168.49.254:8443"
	I0308 03:10:21.889552 1311629 api_server.go:166] Checking apiserver status ...
	I0308 03:10:21.889615 1311629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:10:21.899140 1311629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1430/cgroup
	I0308 03:10:21.907609 1311629 api_server.go:182] apiserver freezer: "10:freezer:/docker/2f8b58bdff9175a4f70489777bda65a144fe32530d2e5d55777dc8f4ba56d87b/crio/crio-100ac8df41cbb4c4ba32bff37f7a7553b4ae388a5ad87cad2bafd92f27faca20"
	I0308 03:10:21.907670 1311629 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2f8b58bdff9175a4f70489777bda65a144fe32530d2e5d55777dc8f4ba56d87b/crio/crio-100ac8df41cbb4c4ba32bff37f7a7553b4ae388a5ad87cad2bafd92f27faca20/freezer.state
	I0308 03:10:21.915018 1311629 api_server.go:204] freezer state: "THAWED"
	I0308 03:10:21.915046 1311629 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0308 03:10:21.918984 1311629 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0308 03:10:21.919005 1311629 status.go:422] ha-328757-m03 apiserver status = Running (err=<nil>)
	I0308 03:10:21.919014 1311629 status.go:257] ha-328757-m03 status: &{Name:ha-328757-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:10:21.919029 1311629 status.go:255] checking status of ha-328757-m04 ...
	I0308 03:10:21.919250 1311629 cli_runner.go:164] Run: docker container inspect ha-328757-m04 --format={{.State.Status}}
	I0308 03:10:21.936371 1311629 status.go:330] ha-328757-m04 host status = "Running" (err=<nil>)
	I0308 03:10:21.936402 1311629 host.go:66] Checking if "ha-328757-m04" exists ...
	I0308 03:10:21.936659 1311629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-328757-m04
	I0308 03:10:21.953668 1311629 host.go:66] Checking if "ha-328757-m04" exists ...
	I0308 03:10:21.953921 1311629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:10:21.953971 1311629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-328757-m04
	I0308 03:10:21.970054 1311629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/ha-328757-m04/id_rsa Username:docker}
	I0308 03:10:22.050774 1311629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:10:22.061303 1311629 status.go:257] ha-328757-m04 status: &{Name:ha-328757-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (12.42s)
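The stderr trace above shows how `status` verifies each control-plane apiserver: `pgrep` finds the process, its `/proc/<pid>/cgroup` line yields the freezer cgroup path, and `freezer.state` must read THAWED. The path-extraction step can be sketched offline against a sample cgroup line (the container IDs below are abbreviated, hypothetical stand-ins for the hashes in the log):

```shell
#!/bin/sh
# Sample /proc/<pid>/cgroup freezer line for a kube-apiserver process
# (hierarchy number and container IDs are hypothetical).
cgroup_line='10:freezer:/docker/325ee1bc/crio/crio-90a041d0'

# status.go strips the "NN:freezer:" prefix, then reads
# /sys/fs/cgroup/freezer<path>/freezer.state, expecting "THAWED".
freezer_path=$(printf '%s\n' "$cgroup_line" | sed -n 's/^[0-9]*:freezer://p')
echo "would read: /sys/fs/cgroup/freezer${freezer_path}/freezer.state"
```

This is a cgroup v1 layout; on a cgroup v2 host there is no freezer hierarchy line of this shape, so the sketch only mirrors what this particular run logged.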

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

                                                
                                    
TestMutliControlPlane/serial/RestartSecondaryNode (20.64s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-328757 node start m02 -v=7 --alsologtostderr: (19.61623107s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (20.64s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.46s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.458181398s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.46s)

                                                
                                    
TestMutliControlPlane/serial/RestartClusterKeepsNodes (213.4s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-328757 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-328757 -v=7 --alsologtostderr
E0308 03:10:47.449706 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-328757 -v=7 --alsologtostderr: (36.47414363s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-328757 --wait=true -v=7 --alsologtostderr
E0308 03:11:23.765608 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:23.770881 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:23.781142 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:23.801391 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:23.841672 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:23.922003 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:24.082411 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:24.402989 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:25.043995 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:26.324210 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:28.885016 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:34.005440 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:11:44.245715 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:12:04.726817 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:12:45.687098 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:14:07.607701 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-328757 --wait=true -v=7 --alsologtostderr: (2m56.783180191s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-328757
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (213.40s)
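RestartClusterKeepsNodes captures `minikube node list` output, stops and restarts the whole cluster with `--wait=true`, then checks the node list is unchanged. A toy version of that before/after comparison, with hypothetical node names and IPs standing in for real captures:

```shell
#!/bin/sh
# Hypothetical `minikube node list -p ha-328757` captures taken before and
# after the stop/start cycle; the test fails if the two differ.
before='ha-328757	192.168.49.2
ha-328757-m02	192.168.49.3
ha-328757-m03	192.168.49.4
ha-328757-m04	192.168.49.5'
after="$before"   # a correct restart reproduces every node

if [ "$before" = "$after" ]; then kept=yes; else kept=no; fi
echo "nodes kept: $kept"
```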

                                                
                                    
TestMutliControlPlane/serial/DeleteSecondaryNode (12.49s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-328757 node delete m03 -v=7 --alsologtostderr: (11.752489827s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (12.49s)
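After deleting m03, the test runs a go-template over each node's conditions and expects every remaining node to report Ready status True. The same "no non-True lines" check can be run over a canned template result (the three sample lines below are hypothetical, one per surviving node):

```shell
#!/bin/sh
# Sample output of the Ready-condition go-template after the m03 delete:
# one status line per remaining node (values hypothetical).
ready=' True
 True
 True'

# Count lines that do NOT contain "True"; zero means every node is Ready.
not_ready=$(printf '%s\n' "$ready" | grep -cv 'True')
[ "$not_ready" -eq 0 ] && echo "all remaining nodes Ready"
```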

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)

TestMutliControlPlane/serial/StopCluster (35.53s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-328757 stop -v=7 --alsologtostderr: (35.413044172s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr: exit status 7 (112.608206ms)
-- stdout --
	ha-328757
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-328757-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-328757-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0308 03:15:08.449861 1328748 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:15:08.449985 1328748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:15:08.449995 1328748 out.go:304] Setting ErrFile to fd 2...
	I0308 03:15:08.449999 1328748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:15:08.450200 1328748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 03:15:08.450433 1328748 out.go:298] Setting JSON to false
	I0308 03:15:08.450467 1328748 mustload.go:65] Loading cluster: ha-328757
	I0308 03:15:08.450573 1328748 notify.go:220] Checking for updates...
	I0308 03:15:08.450927 1328748 config.go:182] Loaded profile config "ha-328757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:15:08.450944 1328748 status.go:255] checking status of ha-328757 ...
	I0308 03:15:08.451444 1328748 cli_runner.go:164] Run: docker container inspect ha-328757 --format={{.State.Status}}
	I0308 03:15:08.469451 1328748 status.go:330] ha-328757 host status = "Stopped" (err=<nil>)
	I0308 03:15:08.469488 1328748 status.go:343] host is not running, skipping remaining checks
	I0308 03:15:08.469502 1328748 status.go:257] ha-328757 status: &{Name:ha-328757 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:15:08.469537 1328748 status.go:255] checking status of ha-328757-m02 ...
	I0308 03:15:08.469846 1328748 cli_runner.go:164] Run: docker container inspect ha-328757-m02 --format={{.State.Status}}
	I0308 03:15:08.485783 1328748 status.go:330] ha-328757-m02 host status = "Stopped" (err=<nil>)
	I0308 03:15:08.485802 1328748 status.go:343] host is not running, skipping remaining checks
	I0308 03:15:08.485810 1328748 status.go:257] ha-328757-m02 status: &{Name:ha-328757-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:15:08.485834 1328748 status.go:255] checking status of ha-328757-m04 ...
	I0308 03:15:08.486057 1328748 cli_runner.go:164] Run: docker container inspect ha-328757-m04 --format={{.State.Status}}
	I0308 03:15:08.501618 1328748 status.go:330] ha-328757-m04 host status = "Stopped" (err=<nil>)
	I0308 03:15:08.501641 1328748 status.go:343] host is not running, skipping remaining checks
	I0308 03:15:08.501648 1328748 status.go:257] ha-328757-m04 status: &{Name:ha-328757-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (35.53s)

TestMutliControlPlane/serial/RestartCluster (107.94s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-328757 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0308 03:15:19.765196 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
E0308 03:16:23.764807 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:16:51.447973 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-328757 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m47.21685314s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (107.94s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.43s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.43s)

TestMutliControlPlane/serial/AddSecondaryNode (41.19s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-328757 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-328757 --control-plane -v=7 --alsologtostderr: (40.424869585s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-328757 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (41.19s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.6s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.60s)

TestJSONOutput/start/Command (43.71s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-957061 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-957061 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (43.708638661s)
--- PASS: TestJSONOutput/start/Command (43.71s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-957061 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-957061 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-957061 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-957061 --output=json --user=testUser: (5.757662119s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-997632 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-997632 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.872312ms)
-- stdout --
	{"specversion":"1.0","id":"db0bff66-b84d-4779-a816-17e7724b1857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-997632] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e18afb78-bde7-40fc-b8fa-233539c18111","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18333"}}
	{"specversion":"1.0","id":"e1f4463f-ad40-45b4-ba2f-edd56cfa58ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"98645deb-8f6d-4f78-9ed9-9cc358059cc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig"}}
	{"specversion":"1.0","id":"a1497fd9-36c4-416b-b778-2c702f08de60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube"}}
	{"specversion":"1.0","id":"8ee0e8a8-7959-4a7e-ad32-6185ef9370be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e102c045-bf19-4369-89b6-5638d62d54ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ca444fd5-11a1-4206-927a-7b7b50881b0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-997632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-997632
--- PASS: TestErrorJSONOutput (0.23s)
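Each line minikube prints with `--output=json` is a CloudEvents-style envelope, as the stdout block above shows. As a sketch (struct fields taken from the events in this run, not from minikube's source), the error event can be decoded like so:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent models the envelope minikube emits per line with --output=json;
// the field set is inferred from this run's events.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// decodeEvent parses one JSON line of minikube output.
func decodeEvent(line string) (cloudEvent, error) {
	var ev cloudEvent
	err := json.Unmarshal([]byte(line), &ev)
	return ev, err
}

func main() {
	// The error event from the log above, abbreviated to the fields used here.
	line := `{"specversion":"1.0","id":"ca444fd5-11a1-4206-927a-7b7b50881b0c",` +
		`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",` +
		`"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	ev, err := decodeEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"])
}
```

This is why the test can assert on exit status 56: the same code appears both as the process exit code and in the event's `data.exitcode`.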

TestKicCustomNetwork/create_custom_network (38.3s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-489367 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-489367 --network=: (36.212558437s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-489367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-489367
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-489367: (2.066056591s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.30s)

TestKicCustomNetwork/use_default_bridge_network (26.89s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-262395 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-262395 --network=bridge: (25.01906792s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-262395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-262395
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-262395: (1.852039347s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.89s)

TestKicExistingNetwork (26.21s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-601493 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-601493 --network=existing-network: (24.233737857s)
helpers_test.go:175: Cleaning up "existing-network-601493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-601493
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-601493: (1.84338773s)
--- PASS: TestKicExistingNetwork (26.21s)

TestKicCustomSubnet (26.32s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-608583 --subnet=192.168.60.0/24
E0308 03:20:19.764608 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-608583 --subnet=192.168.60.0/24: (24.30100289s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-608583 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-608583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-608583
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-608583: (2.003238311s)
--- PASS: TestKicCustomSubnet (26.32s)
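The subnet check above relies on docker's `--format` templating, which is Go `text/template` under the hood. The same format string can be run locally against a decoded inspect document — the minimal JSON below is a stand-in containing only the fields the template touches, not real `docker network inspect` output:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// subnetFmt is the --format string the test hands to `docker network inspect`.
const subnetFmt = `{{(index .IPAM.Config 0).Subnet}}`

// firstSubnet applies subnetFmt to one decoded network object, the way the
// docker CLI applies --format to each inspect result.
func firstSubnet(doc string) (string, error) {
	var network map[string]interface{}
	if err := json.Unmarshal([]byte(doc), &network); err != nil {
		return "", err
	}
	tmpl, err := template.New("subnet").Parse(subnetFmt)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, network); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// Stand-in for the custom-subnet-608583 network's inspect output.
	doc := `{"IPAM":{"Config":[{"Subnet":"192.168.60.0/24"}]}}`
	s, err := firstSubnet(doc)
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```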

TestKicStaticIP (27.23s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-323501 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-323501 --static-ip=192.168.200.200: (25.077320172s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-323501 ip
helpers_test.go:175: Cleaning up "static-ip-323501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-323501
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-323501: (2.025653127s)
--- PASS: TestKicStaticIP (27.23s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (49.63s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-430532 --driver=docker  --container-runtime=crio
E0308 03:21:23.764799 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-430532 --driver=docker  --container-runtime=crio: (21.605722124s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-433622 --driver=docker  --container-runtime=crio
E0308 03:21:42.811482 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-433622 --driver=docker  --container-runtime=crio: (22.98204203s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-430532
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-433622
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-433622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-433622
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-433622: (1.809738796s)
helpers_test.go:175: Cleaning up "first-430532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-430532
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-430532: (2.201525377s)
--- PASS: TestMinikubeProfile (49.63s)

TestMountStart/serial/StartWithMountFirst (8.8s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-179798 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-179798 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.803096145s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.80s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-179798 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.72s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-201563 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-201563 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.719640151s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.72s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-201563 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-179798 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-179798 --alsologtostderr -v=5: (1.599679324s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-201563 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-201563
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-201563: (1.182079419s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.63s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-201563
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-201563: (6.625611285s)
--- PASS: TestMountStart/serial/RestartStopped (7.63s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-201563 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (75.56s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954588 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-954588 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m15.130233227s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.56s)

TestMultiNode/serial/DeployApp2Nodes (5.06s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-954588 -- rollout status deployment/busybox: (3.570221118s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-68q5s -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-75b59 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-68q5s -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-75b59 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-68q5s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-75b59 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.06s)

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-68q5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-68q5s -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-75b59 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954588 -- exec busybox-5b5d89c9d6-75b59 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

TestMultiNode/serial/AddNode (25.36s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-954588 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-954588 -v 3 --alsologtostderr: (24.77066599s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.36s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-954588 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.29s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (9.1s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp testdata/cp-test.txt multinode-954588:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp multinode-954588:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1268167003/001/cp-test_multinode-954588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp multinode-954588:/home/docker/cp-test.txt multinode-954588-m02:/home/docker/cp-test_multinode-954588_multinode-954588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m02 "sudo cat /home/docker/cp-test_multinode-954588_multinode-954588-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp multinode-954588:/home/docker/cp-test.txt multinode-954588-m03:/home/docker/cp-test_multinode-954588_multinode-954588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m03 "sudo cat /home/docker/cp-test_multinode-954588_multinode-954588-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp testdata/cp-test.txt multinode-954588-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp multinode-954588-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1268167003/001/cp-test_multinode-954588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp multinode-954588-m02:/home/docker/cp-test.txt multinode-954588:/home/docker/cp-test_multinode-954588-m02_multinode-954588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588 "sudo cat /home/docker/cp-test_multinode-954588-m02_multinode-954588.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp multinode-954588-m02:/home/docker/cp-test.txt multinode-954588-m03:/home/docker/cp-test_multinode-954588-m02_multinode-954588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m03 "sudo cat /home/docker/cp-test_multinode-954588-m02_multinode-954588-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp testdata/cp-test.txt multinode-954588-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp multinode-954588-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1268167003/001/cp-test_multinode-954588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp multinode-954588-m03:/home/docker/cp-test.txt multinode-954588:/home/docker/cp-test_multinode-954588-m03_multinode-954588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588 "sudo cat /home/docker/cp-test_multinode-954588-m03_multinode-954588.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 cp multinode-954588-m03:/home/docker/cp-test.txt multinode-954588-m02:/home/docker/cp-test_multinode-954588-m03_multinode-954588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 ssh -n multinode-954588-m02 "sudo cat /home/docker/cp-test_multinode-954588-m03_multinode-954588-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.10s)

TestMultiNode/serial/StopNode (2.1s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-954588 node stop m03: (1.182293571s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-954588 status: exit status 7 (452.594055ms)

-- stdout --
	multinode-954588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-954588-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-954588-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-954588 status --alsologtostderr: exit status 7 (460.905058ms)

-- stdout --
	multinode-954588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-954588-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-954588-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0308 03:24:21.707822 1392505 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:24:21.707939 1392505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:24:21.707949 1392505 out.go:304] Setting ErrFile to fd 2...
	I0308 03:24:21.707953 1392505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:24:21.708190 1392505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 03:24:21.708357 1392505 out.go:298] Setting JSON to false
	I0308 03:24:21.708384 1392505 mustload.go:65] Loading cluster: multinode-954588
	I0308 03:24:21.708491 1392505 notify.go:220] Checking for updates...
	I0308 03:24:21.709179 1392505 config.go:182] Loaded profile config "multinode-954588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:24:21.709212 1392505 status.go:255] checking status of multinode-954588 ...
	I0308 03:24:21.710559 1392505 cli_runner.go:164] Run: docker container inspect multinode-954588 --format={{.State.Status}}
	I0308 03:24:21.727376 1392505 status.go:330] multinode-954588 host status = "Running" (err=<nil>)
	I0308 03:24:21.727416 1392505 host.go:66] Checking if "multinode-954588" exists ...
	I0308 03:24:21.727674 1392505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-954588
	I0308 03:24:21.744625 1392505 host.go:66] Checking if "multinode-954588" exists ...
	I0308 03:24:21.744878 1392505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:24:21.744919 1392505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-954588
	I0308 03:24:21.760961 1392505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/multinode-954588/id_rsa Username:docker}
	I0308 03:24:21.843231 1392505 ssh_runner.go:195] Run: systemctl --version
	I0308 03:24:21.847348 1392505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:24:21.857685 1392505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 03:24:21.911134 1392505 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:68 SystemTime:2024-03-08 03:24:21.90222011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 03:24:21.911960 1392505 kubeconfig.go:125] found "multinode-954588" server: "https://192.168.67.2:8443"
	I0308 03:24:21.911998 1392505 api_server.go:166] Checking apiserver status ...
	I0308 03:24:21.912042 1392505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:24:21.922271 1392505 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1563/cgroup
	I0308 03:24:21.930562 1392505 api_server.go:182] apiserver freezer: "10:freezer:/docker/f846b7f4c9bcac90e8ea329b85e1e307cdc83407623b8edc2f90e1bce55a3e28/crio/crio-679abe86a5f1fe089f03489e80062a8e57e506771a67ba182c002bd4f834eeb1"
	I0308 03:24:21.930617 1392505 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f846b7f4c9bcac90e8ea329b85e1e307cdc83407623b8edc2f90e1bce55a3e28/crio/crio-679abe86a5f1fe089f03489e80062a8e57e506771a67ba182c002bd4f834eeb1/freezer.state
	I0308 03:24:21.937840 1392505 api_server.go:204] freezer state: "THAWED"
	I0308 03:24:21.937871 1392505 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0308 03:24:21.941564 1392505 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0308 03:24:21.941627 1392505 status.go:422] multinode-954588 apiserver status = Running (err=<nil>)
	I0308 03:24:21.941643 1392505 status.go:257] multinode-954588 status: &{Name:multinode-954588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:24:21.941660 1392505 status.go:255] checking status of multinode-954588-m02 ...
	I0308 03:24:21.941888 1392505 cli_runner.go:164] Run: docker container inspect multinode-954588-m02 --format={{.State.Status}}
	I0308 03:24:21.958106 1392505 status.go:330] multinode-954588-m02 host status = "Running" (err=<nil>)
	I0308 03:24:21.958128 1392505 host.go:66] Checking if "multinode-954588-m02" exists ...
	I0308 03:24:21.958367 1392505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-954588-m02
	I0308 03:24:21.973799 1392505 host.go:66] Checking if "multinode-954588-m02" exists ...
	I0308 03:24:21.974037 1392505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:24:21.974082 1392505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-954588-m02
	I0308 03:24:21.990433 1392505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/18333-1245188/.minikube/machines/multinode-954588-m02/id_rsa Username:docker}
	I0308 03:24:22.078850 1392505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:24:22.089235 1392505 status.go:257] multinode-954588-m02 status: &{Name:multinode-954588-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:24:22.089272 1392505 status.go:255] checking status of multinode-954588-m03 ...
	I0308 03:24:22.089541 1392505 cli_runner.go:164] Run: docker container inspect multinode-954588-m03 --format={{.State.Status}}
	I0308 03:24:22.106068 1392505 status.go:330] multinode-954588-m03 host status = "Stopped" (err=<nil>)
	I0308 03:24:22.106097 1392505 status.go:343] host is not running, skipping remaining checks
	I0308 03:24:22.106106 1392505 status.go:257] multinode-954588-m03 status: &{Name:multinode-954588-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)

TestMultiNode/serial/StartAfterStop (8.69s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-954588 node start m03 -v=7 --alsologtostderr: (8.058951651s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.69s)

TestMultiNode/serial/RestartKeepsNodes (106.25s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-954588
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-954588
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-954588: (24.560564654s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954588 --wait=true -v=8 --alsologtostderr
E0308 03:25:19.764776 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-954588 --wait=true -v=8 --alsologtostderr: (1m21.566214805s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-954588
--- PASS: TestMultiNode/serial/RestartKeepsNodes (106.25s)

TestMultiNode/serial/DeleteNode (5.36s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-954588 node delete m03: (4.792528048s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.36s)

TestMultiNode/serial/StopMultiNode (23.71s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 stop
E0308 03:26:23.765712 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-954588 stop: (23.517362542s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-954588 status: exit status 7 (93.239699ms)

-- stdout --
	multinode-954588
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-954588-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-954588 status --alsologtostderr: exit status 7 (96.743882ms)

-- stdout --
	multinode-954588
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-954588-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0308 03:26:46.075768 1401589 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:26:46.075915 1401589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:26:46.075926 1401589 out.go:304] Setting ErrFile to fd 2...
	I0308 03:26:46.075930 1401589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:26:46.076125 1401589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 03:26:46.076337 1401589 out.go:298] Setting JSON to false
	I0308 03:26:46.076371 1401589 mustload.go:65] Loading cluster: multinode-954588
	I0308 03:26:46.076482 1401589 notify.go:220] Checking for updates...
	I0308 03:26:46.076857 1401589 config.go:182] Loaded profile config "multinode-954588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:26:46.076875 1401589 status.go:255] checking status of multinode-954588 ...
	I0308 03:26:46.077360 1401589 cli_runner.go:164] Run: docker container inspect multinode-954588 --format={{.State.Status}}
	I0308 03:26:46.097062 1401589 status.go:330] multinode-954588 host status = "Stopped" (err=<nil>)
	I0308 03:26:46.097083 1401589 status.go:343] host is not running, skipping remaining checks
	I0308 03:26:46.097091 1401589 status.go:257] multinode-954588 status: &{Name:multinode-954588 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:26:46.097139 1401589 status.go:255] checking status of multinode-954588-m02 ...
	I0308 03:26:46.097382 1401589 cli_runner.go:164] Run: docker container inspect multinode-954588-m02 --format={{.State.Status}}
	I0308 03:26:46.113882 1401589 status.go:330] multinode-954588-m02 host status = "Stopped" (err=<nil>)
	I0308 03:26:46.113902 1401589 status.go:343] host is not running, skipping remaining checks
	I0308 03:26:46.113908 1401589 status.go:257] multinode-954588-m02 status: &{Name:multinode-954588-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.71s)

TestMultiNode/serial/RestartMultiNode (49.48s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954588 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-954588 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (48.924579828s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954588 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.48s)

TestMultiNode/serial/ValidateNameConflict (27.14s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-954588
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954588-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-954588-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.532024ms)

-- stdout --
	* [multinode-954588-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-954588-m02' is duplicated with machine name 'multinode-954588-m02' in profile 'multinode-954588'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954588-m03 --driver=docker  --container-runtime=crio
E0308 03:27:46.808628 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-954588-m03 --driver=docker  --container-runtime=crio: (24.896556415s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-954588
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-954588: exit status 80 (271.46324ms)

-- stdout --
	* Adding node m03 to cluster multinode-954588 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-954588-m03 already exists in multinode-954588-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-954588-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-954588-m03: (1.828610894s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.14s)

                                                
                                    
TestPreload (107.32s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-711278 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-711278 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m10.828174763s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-711278 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-711278 image pull gcr.io/k8s-minikube/busybox: (2.592364208s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-711278
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-711278: (5.770423799s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-711278 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-711278 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (25.585642317s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-711278 image list
helpers_test.go:175: Cleaning up "test-preload-711278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-711278
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-711278: (2.313443194s)
--- PASS: TestPreload (107.32s)

                                                
                                    
TestScheduledStopUnix (100.28s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-270778 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-270778 --memory=2048 --driver=docker  --container-runtime=crio: (24.398738977s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270778 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-270778 -n scheduled-stop-270778
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270778 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270778 --cancel-scheduled
E0308 03:30:19.765002 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-270778 -n scheduled-stop-270778
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-270778
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270778 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0308 03:31:23.766893 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-270778
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-270778: exit status 7 (80.870208ms)

                                                
                                                
-- stdout --
	scheduled-stop-270778
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-270778 -n scheduled-stop-270778
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-270778 -n scheduled-stop-270778: exit status 7 (74.888967ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-270778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-270778
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-270778: (4.495047011s)
--- PASS: TestScheduledStopUnix (100.28s)

                                                
                                    
TestInsufficientStorage (13.4s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-288005 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-288005 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.043030093s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"70fb781a-962f-41f2-ac0a-0f8d92d889cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-288005] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d244f382-1b1f-4435-b9cb-db5f59017d17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18333"}}
	{"specversion":"1.0","id":"f0b5aeb4-dffe-4bf6-aba7-5955ca1c990c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9727a7fa-4805-44a7-9d1e-adbda92bb711","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig"}}
	{"specversion":"1.0","id":"e5f6c153-cd00-43c9-8ee1-6b77559b63a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube"}}
	{"specversion":"1.0","id":"1740305d-0176-43ba-a8fa-863683b8c34d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0557468a-4eb9-466a-b700-8746b2e1971f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f3836eb8-ca35-4cf8-ae44-726a36c65743","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7fcc6261-3c02-47ed-a6d5-42c87f736d0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cfbf3fdc-e118-4638-b436-b44c039f4cd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9f76150-1c50-40e6-950f-e45e7098f3a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"65c6e7c0-941d-4229-a638-d944ddfd5304","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-288005\" primary control-plane node in \"insufficient-storage-288005\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"10e33ea3-84d6-4fbf-93b2-ba0f33caf794","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4553ba28-edf9-4cb8-80e0-4ceba721d660","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2dc0585a-3af4-4524-b551-dbb4af0d69e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-288005 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-288005 --output=json --layout=cluster: exit status 7 (259.744145ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-288005","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-288005","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 03:31:45.497633 1422348 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-288005" does not appear in /home/jenkins/minikube-integration/18333-1245188/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-288005 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-288005 --output=json --layout=cluster: exit status 7 (258.432072ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-288005","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-288005","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 03:31:45.756388 1422446 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-288005" does not appear in /home/jenkins/minikube-integration/18333-1245188/kubeconfig
	E0308 03:31:45.766866 1422446 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/insufficient-storage-288005/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-288005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-288005
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-288005: (1.83547241s)
--- PASS: TestInsufficientStorage (13.40s)

                                                
                                    
TestRunningBinaryUpgrade (61.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2272027997 start -p running-upgrade-080620 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2272027997 start -p running-upgrade-080620 --memory=2200 --vm-driver=docker  --container-runtime=crio: (25.273548685s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-080620 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-080620 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.427004938s)
helpers_test.go:175: Cleaning up "running-upgrade-080620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-080620
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-080620: (2.441419737s)
--- PASS: TestRunningBinaryUpgrade (61.06s)

                                                
                                    
TestKubernetesUpgrade (359.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-575190 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-575190 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.750325268s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-575190
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-575190: (1.226445623s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-575190 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-575190 status --format={{.Host}}: exit status 7 (124.082714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-575190 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-575190 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.161873357s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-575190 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-575190 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-575190 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (77.386072ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-575190] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-575190
	    minikube start -p kubernetes-upgrade-575190 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5751902 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-575190 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-575190 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-575190 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.187413774s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-575190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-575190
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-575190: (2.421543605s)
--- PASS: TestKubernetesUpgrade (359.01s)

                                                
                                    
TestMissingContainerUpgrade (99.78s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1142110516 start -p missing-upgrade-548366 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1142110516 start -p missing-upgrade-548366 --memory=2200 --driver=docker  --container-runtime=crio: (26.69053482s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-548366
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-548366: (14.605231991s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-548366
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-548366 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-548366 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.975472224s)
helpers_test.go:175: Cleaning up "missing-upgrade-548366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-548366
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-548366: (6.850853035s)
--- PASS: TestMissingContainerUpgrade (99.78s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241318 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-241318 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (103.390286ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-241318] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (33.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241318 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-241318 --driver=docker  --container-runtime=crio: (32.685829797s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-241318 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.07s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (129.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.507266813 start -p stopped-upgrade-253641 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.507266813 start -p stopped-upgrade-253641 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m40.543002692s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.507266813 -p stopped-upgrade-253641 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.507266813 -p stopped-upgrade-253641 stop: (4.448100177s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-253641 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-253641 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.457976137s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (129.45s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (12.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241318 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-241318 --no-kubernetes --driver=docker  --container-runtime=crio: (6.55185907s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-241318 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-241318 status -o json: exit status 2 (364.948471ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-241318","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-241318
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-241318: (5.265231094s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.18s)

                                                
                                    
TestNoKubernetes/serial/Start (5.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241318 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-241318 --no-kubernetes --driver=docker  --container-runtime=crio: (5.851491273s)
--- PASS: TestNoKubernetes/serial/Start (5.85s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-241318 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-241318 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.656137ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-241318
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-241318: (1.209723291s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (7.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241318 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-241318 --driver=docker  --container-runtime=crio: (7.336618644s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-241318 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-241318 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.097107ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestNetworkPlugins/group/false (4.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-562916 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-562916 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (173.025964ms)

                                                
                                                
-- stdout --
	* [false-562916] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:33:01.392337 1443749 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:33:01.392479 1443749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:33:01.392492 1443749 out.go:304] Setting ErrFile to fd 2...
	I0308 03:33:01.392500 1443749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:33:01.392825 1443749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-1245188/.minikube/bin
	I0308 03:33:01.393684 1443749 out.go:298] Setting JSON to false
	I0308 03:33:01.395392 1443749 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":22528,"bootTime":1709846254,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:33:01.395496 1443749 start.go:139] virtualization: kvm guest
	I0308 03:33:01.398177 1443749 out.go:177] * [false-562916] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:33:01.399758 1443749 notify.go:220] Checking for updates...
	I0308 03:33:01.399774 1443749 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:33:01.401294 1443749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:33:01.402716 1443749 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-1245188/kubeconfig
	I0308 03:33:01.404237 1443749 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-1245188/.minikube
	I0308 03:33:01.405493 1443749 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:33:01.406643 1443749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:33:01.408269 1443749 config.go:182] Loaded profile config "cert-expiration-390778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:33:01.408390 1443749 config.go:182] Loaded profile config "cert-options-547819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:33:01.408508 1443749 config.go:182] Loaded profile config "stopped-upgrade-253641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0308 03:33:01.408628 1443749 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:33:01.432261 1443749 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0308 03:33:01.432388 1443749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0308 03:33:01.484978 1443749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-08 03:33:01.475605199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0308 03:33:01.485144 1443749 docker.go:295] overlay module found
	I0308 03:33:01.487255 1443749 out.go:177] * Using the docker driver based on user configuration
	I0308 03:33:01.488384 1443749 start.go:297] selected driver: docker
	I0308 03:33:01.488401 1443749 start.go:901] validating driver "docker" against <nil>
	I0308 03:33:01.488423 1443749 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:33:01.490353 1443749 out.go:177] 
	W0308 03:33:01.491433 1443749 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0308 03:33:01.492550 1443749 out.go:177] 

                                                
                                                
** /stderr **
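Note (an illustrative sketch, not minikube's actual source): the exit status 14 above comes from a usage check that rejects `--cni=false` when the crio runtime is selected, since crio has no built-in pod networking. The `validate_cni` helper below is hypothetical and only mirrors the observed behavior; choosing an explicit CNI instead of `false` avoids the MK_USAGE error.

```shell
#!/bin/sh
# Hypothetical validate_cni mirroring the MK_USAGE check seen above:
# crio provides no built-in networking, so disabling CNI is rejected.
validate_cni() {
  runtime=$1
  cni=$2
  if [ "$runtime" = "crio" ] && [ "$cni" = "false" ]; then
    echo "X Exiting due to MK_USAGE: The \"crio\" container runtime requires CNI"
    return 14
  fi
  echo "ok: runtime=$runtime cni=$cni"
}
validate_cni crio false || echo "exit code: $?"
```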
net_test.go:88: 
----------------------- debugLogs start: false-562916 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-562916" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-562916

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562916"

                                                
                                                
----------------------- debugLogs end: false-562916 [took: 3.64817624s] --------------------------------
helpers_test.go:175: Cleaning up "false-562916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-562916
--- PASS: TestNetworkPlugins/group/false (4.01s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-253641
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

TestPause/serial/Start (51.79s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-611152 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-611152 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (51.790081386s)
--- PASS: TestPause/serial/Start (51.79s)

TestPause/serial/SecondStartNoReconfiguration (19.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-611152 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-611152 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.824181697s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (19.84s)

TestPause/serial/Pause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-611152 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-611152 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-611152 --output=json --layout=cluster: exit status 2 (289.901472ms)

                                                
                                                
-- stdout --
	{"Name":"pause-611152","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-611152","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-611152 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-611152 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.75s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-611152 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-611152 --alsologtostderr -v=5: (2.748172937s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

TestPause/serial/VerifyDeletedResources (0.66s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0308 03:35:19.764332 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-611152
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-611152: exit status 1 (15.465971ms)

-- stdout --
	[]

-- /stdout --
** stderr **
	Error response from daemon: get pause-611152: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.66s)

TestNetworkPlugins/group/auto/Start (56.95s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (56.950472195s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.95s)

TestNetworkPlugins/group/kindnet/Start (41.41s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.41336755s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.41s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-562916 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-562916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cj284" [67728220-0fd3-45e7-82ca-5b9c01c721cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cj284" [67728220-0fd3-45e7-82ca-5b9c01c721cf] Running
E0308 03:36:23.765494 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003622976s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.18s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-562916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5j7lx" [026594b2-d0b2-4b53-9f87-9d4fd8e68f2c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003977307s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (68.48s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.476838023s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.48s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-562916 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.82s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-562916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-95zvz" [7af0048e-1793-4a48-a45a-deb66f0a3efb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-95zvz" [7af0048e-1793-4a48-a45a-deb66f0a3efb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004200538s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.82s)

TestNetworkPlugins/group/custom-flannel/Start (62.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.580312734s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.58s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-562916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (41.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (41.659693362s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.66s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9x94z" [b2825a09-99fc-4ebf-bf3e-4eb428b21360] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006374898s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-562916 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-562916 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-562916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6h7wh" [1a4fe370-4d8b-4096-b596-170c3f92d3cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6h7wh" [1a4fe370-4d8b-4096-b596-170c3f92d3cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00377323s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-562916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-npmcx" [86f732a0-902e-4935-9d8b-da39ba22d4e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-npmcx" [86f732a0-902e-4935-9d8b-da39ba22d4e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.058655826s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-562916 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-562916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9wbtd" [33ae064a-07ec-4b69-9d13-a35e79a926b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9wbtd" [33ae064a-07ec-4b69-9d13-a35e79a926b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004214699s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-562916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-562916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-562916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (65.28s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.279726188s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.28s)

TestNetworkPlugins/group/bridge/Start (43.2s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-562916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (43.203496753s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.20s)

TestStartStop/group/old-k8s-version/serial/FirstStart (136.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-482665 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0308 03:38:22.812519 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-482665 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m16.227276571s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.23s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-562916 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-562916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rp7xs" [bda4da93-0c3f-4b1a-8d9d-7361f20f13aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rp7xs" [bda4da93-0c3f-4b1a-8d9d-7361f20f13aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004100174s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-562916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jrfzf" [890def2e-5bdd-46f6-a9b2-457a9c357b16] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003971138s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/no-preload/serial/FirstStart (65.05s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-215000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-215000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m5.052102655s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.05s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-562916 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-562916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6fwrs" [c2b4fe5f-1b6e-47c5-8885-4e2068c681c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6fwrs" [c2b4fe5f-1b6e-47c5-8885-4e2068c681c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005718575s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (53.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-213022 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-213022 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (53.261994367s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-562916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-562916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)
E0308 03:44:36.391927 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:44:46.527508 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:46.632773 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:45:07.113202 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:45:19.764318 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-316264 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0308 03:40:19.765221 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/addons-096357/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-316264 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (43.465873289s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-213022 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [10384fef-ec42-4b7d-bc8c-7d7982d0571c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [10384fef-ec42-4b7d-bc8c-7d7982d0571c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003852503s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-213022 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-215000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0602eb61-df74-414c-97e1-9000a42397ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0602eb61-df74-414c-97e1-9000a42397ea] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004015123s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-215000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-213022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-213022 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-213022 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-213022 --alsologtostderr -v=3: (11.904851352s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-482665 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ad5f949-cd21-41e5-bc45-c86de9b11d72] Pending
helpers_test.go:344: "busybox" [3ad5f949-cd21-41e5-bc45-c86de9b11d72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ad5f949-cd21-41e5-bc45-c86de9b11d72] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003522669s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-482665 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-215000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-215000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-215000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-215000 --alsologtostderr -v=3: (11.83234663s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-316264 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c62739bf-bf92-4551-a05a-48e844a2f4ad] Pending
helpers_test.go:344: "busybox" [c62739bf-bf92-4551-a05a-48e844a2f4ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c62739bf-bf92-4551-a05a-48e844a2f4ad] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003917046s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-316264 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-482665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-482665 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-213022 -n embed-certs-213022
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-213022 -n embed-certs-213022: exit status 7 (83.450179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-213022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
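The `status error: exit status 7 (may be ok)` line above reflects that `minikube status` reports a stopped host through a nonzero exit code, which the harness records as informational rather than treating as a failure. A minimal sketch of that handling pattern, using a hypothetical `fake_status` stub in place of the real `minikube status --format={{.Host}}` call so the snippet is self-contained:

```shell
# Hypothetical stub for `minikube status --format={{.Host}}` against a
# stopped profile: prints "Stopped" and exits 7, matching the log above.
fake_status() { echo "Stopped"; return 7; }

# Capture both the output and the exit code; a nonzero code is logged as
# informational ("may be ok") instead of aborting the check.
host_state=$(fake_status) && rc=0 || rc=$?
echo "status: ${host_state} (exit ${rc}, may be ok)"
```

This mirrors the harness behavior seen for each `EnableAddonAfterStop` step in this report: the status probe is expected to exit nonzero while the profile is stopped, and the test proceeds to `addons enable` regardless.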

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (275.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-213022 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-213022 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (4m35.660292022s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-213022 -n embed-certs-213022
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (275.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-482665 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-482665 --alsologtostderr -v=3: (11.901707654s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-316264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-316264 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-316264 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-316264 --alsologtostderr -v=3: (13.090221175s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215000 -n no-preload-215000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215000 -n no-preload-215000: exit status 7 (83.380738ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-215000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (262.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-215000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-215000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (4m22.33543437s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215000 -n no-preload-215000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-482665 -n old-k8s-version-482665
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-482665 -n old-k8s-version-482665: exit status 7 (99.999572ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-482665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (136.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-482665 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-482665 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m15.781807111s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-482665 -n old-k8s-version-482665
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (136.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-316264 -n default-k8s-diff-port-316264
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-316264 -n default-k8s-diff-port-316264: exit status 7 (102.425123ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-316264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-316264 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0308 03:41:17.617296 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:17.622634 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:17.632907 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:17.653635 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:17.693938 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:17.774272 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:17.934400 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:18.255025 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:18.895651 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:20.176214 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:22.736393 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:23.765829 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:41:27.856903 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:33.287749 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:33.292995 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:33.303266 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:33.323510 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:33.363785 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:33.444139 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:33.604658 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:33.925248 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:34.565488 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:35.845654 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:38.098094 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:41:38.406515 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:43.526790 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:53.767500 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:41:58.579270 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:42:14.248446 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:42:39.540334 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:42:42.854277 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:42.859556 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:42.869828 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:42.890143 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:42.930448 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:43.010749 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:43.171146 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:43.491776 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:44.132276 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:45.412769 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:47.973426 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:49.600303 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:49.605543 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:49.615782 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:49.636029 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:49.676309 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:49.756661 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:49.917128 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:50.237711 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:50.878755 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:52.159822 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:52.684281 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:52.689544 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:52.699829 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:52.720053 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:52.760320 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:52.840642 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:53.001450 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:53.093668 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:42:53.322225 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:53.963339 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:54.720157 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:42:55.209062 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:42:55.244247 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:57.804965 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:42:59.840629 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:43:02.925869 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
E0308 03:43:03.334388 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:43:10.081118 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
E0308 03:43:13.166819 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-316264 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (4m22.815669521s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-316264 -n default-k8s-diff-port-316264
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-cgftz" [165e06ad-aaed-4289-8a68-b0861d613469] Running
E0308 03:43:23.814750 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003877714s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-cgftz" [165e06ad-aaed-4289-8a68-b0861d613469] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003683575s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-482665 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-482665 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-482665 --alsologtostderr -v=1
E0308 03:43:30.561932 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-482665 -n old-k8s-version-482665
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-482665 -n old-k8s-version-482665: exit status 2 (290.940226ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-482665 -n old-k8s-version-482665
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-482665 -n old-k8s-version-482665: exit status 2 (292.886109ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-482665 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-482665 -n old-k8s-version-482665
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-482665 -n old-k8s-version-482665
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-028073 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0308 03:44:01.461155 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/auto-562916/client.crt: no such file or directory
E0308 03:44:04.775943 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
E0308 03:44:05.564221 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:05.569477 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:05.579750 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:05.600026 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:05.640330 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:05.720669 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:05.881768 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:06.202494 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:06.842890 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:08.123590 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:10.684270 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:11.523055 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-028073 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (37.64985641s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.65s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-028073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-028073 --alsologtostderr -v=3
E0308 03:44:14.607270 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/enable-default-cni-562916/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-028073 --alsologtostderr -v=3: (1.194679825s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-028073 -n newest-cni-028073
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-028073 -n newest-cni-028073: exit status 7 (77.090657ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-028073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-028073 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0308 03:44:15.805417 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:17.129223 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/kindnet-562916/client.crt: no such file or directory
E0308 03:44:26.046567 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
E0308 03:44:26.150954 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:44:26.156225 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:44:26.166512 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:44:26.186779 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:44:26.227036 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:44:26.307425 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:44:26.467827 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:44:26.788394 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
E0308 03:44:26.809691 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/functional-892717/client.crt: no such file or directory
E0308 03:44:27.429266 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-028073 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (12.184913544s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-028073 -n newest-cni-028073
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-028073 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-028073 --alsologtostderr -v=1
E0308 03:44:28.709660 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/flannel-562916/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-028073 -n newest-cni-028073
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-028073 -n newest-cni-028073: exit status 2 (289.012911ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-028073 -n newest-cni-028073
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-028073 -n newest-cni-028073: exit status 2 (298.914037ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-028073 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-028073 -n newest-cni-028073
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-028073 -n newest-cni-028073
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4qxxw" [f49387dd-8ffb-40fe-8f43-acf54ae5e7e4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004005669s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-f2k2b" [dcb76602-f05b-4111-b236-5836c4337cf0] Running
E0308 03:45:26.696956 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/calico-562916/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004281073s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4qxxw" [f49387dd-8ffb-40fe-8f43-acf54ae5e7e4] Running
E0308 03:45:27.488680 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/bridge-562916/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004386612s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-215000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-f2k2b" [dcb76602-f05b-4111-b236-5836c4337cf0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004026481s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-213022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-215000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-215000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-215000 -n no-preload-215000
E0308 03:45:33.443405 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/custom-flannel-562916/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-215000 -n no-preload-215000: exit status 2 (283.541782ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-215000 -n no-preload-215000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-215000 -n no-preload-215000: exit status 2 (291.179857ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-215000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-215000 -n no-preload-215000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-215000 -n no-preload-215000
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5222j" [3dab2ade-a7da-40d0-a5bc-56749b0b5eca] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00419463s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-213022 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (2.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-213022 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-213022 -n embed-certs-213022
E0308 03:45:38.612919 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/old-k8s-version-482665/client.crt: no such file or directory
E0308 03:45:38.618173 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/old-k8s-version-482665/client.crt: no such file or directory
E0308 03:45:38.628415 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/old-k8s-version-482665/client.crt: no such file or directory
E0308 03:45:38.648680 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/old-k8s-version-482665/client.crt: no such file or directory
E0308 03:45:38.688991 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/old-k8s-version-482665/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-213022 -n embed-certs-213022: exit status 2 (280.864728ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-213022 -n embed-certs-213022
E0308 03:45:38.769666 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/old-k8s-version-482665/client.crt: no such file or directory
E0308 03:45:38.930053 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/old-k8s-version-482665/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-213022 -n embed-certs-213022: exit status 2 (277.376977ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-213022 --alsologtostderr -v=1
E0308 03:45:39.250397 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/old-k8s-version-482665/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-213022 -n embed-certs-213022
E0308 03:45:39.891406 1252085 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-1245188/.minikube/profiles/old-k8s-version-482665/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-213022 -n embed-certs-213022
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.62s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5222j" [3dab2ade-a7da-40d0-a5bc-56749b0b5eca] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003165965s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-316264 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-316264 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-316264 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-316264 -n default-k8s-diff-port-316264
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-316264 -n default-k8s-diff-port-316264: exit status 2 (289.23236ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-316264 -n default-k8s-diff-port-316264
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-316264 -n default-k8s-diff-port-316264: exit status 2 (276.744938ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-316264 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-316264 -n default-k8s-diff-port-316264
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-316264 -n default-k8s-diff-port-316264
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.50s)

Test skip (27/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                

                                                
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                

                                                
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                

                                                
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.88s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-562916 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-562916

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-562916

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-562916

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-562916

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-562916

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-562916

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-562916

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-562916

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-562916

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-562916

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> host: /etc/hosts:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> host: /etc/resolv.conf:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-562916

>>> host: crictl pods:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> host: crictl containers:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> k8s: describe netcat deployment:
error: context "kubenet-562916" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-562916" does not exist

>>> k8s: netcat logs:
error: context "kubenet-562916" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-562916" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-562916" does not exist

>>> k8s: coredns logs:
error: context "kubenet-562916" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-562916" does not exist

>>> k8s: api server logs:
error: context "kubenet-562916" does not exist

>>> host: /etc/cni:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> host: ip a s:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> host: ip r s:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> host: iptables-save:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> host: iptables table nat:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-562916" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-562916" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-562916

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562916"

                                                
                                                
----------------------- debugLogs end: kubenet-562916 [took: 4.721149045s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-562916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-562916
--- SKIP: TestNetworkPlugins/group/kubenet (4.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-562916 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-562916" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-562916

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

>>> host: /etc/crio:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

>>> host: crio config:
* Profile "cilium-562916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562916"

----------------------- debugLogs end: cilium-562916 [took: 5.492153523s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-562916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-562916
--- SKIP: TestNetworkPlugins/group/cilium (5.69s)

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-827114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-827114
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
